00:00:00.001 Started by upstream project "autotest-per-patch" build number 126212 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.104 using credential 00000000-0000-0000-0000-000000000002 00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.143 Fetching changes from the remote Git repository 00:00:00.145 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.181 Using shallow fetch with depth 1 00:00:00.181 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.181 > git --version # timeout=10 00:00:00.207 > git --version # 'git version 2.39.2' 00:00:00.207 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.231 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.231 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.226 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.242 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.255 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.255 > git config core.sparsecheckout # timeout=10 00:00:04.268 > git read-tree -mu HEAD # timeout=10 00:00:04.286 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.311 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.312 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.427 [Pipeline] Start of Pipeline 00:00:04.442 [Pipeline] library 00:00:04.443 Loading library shm_lib@master 00:00:04.443 Library shm_lib@master is cached. Copying from home. 00:00:04.460 [Pipeline] node 00:00:04.486 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.488 [Pipeline] { 00:00:04.502 [Pipeline] catchError 00:00:04.504 [Pipeline] { 00:00:04.519 [Pipeline] wrap 00:00:04.533 [Pipeline] { 00:00:04.545 [Pipeline] stage 00:00:04.548 [Pipeline] { (Prologue) 00:00:04.571 [Pipeline] echo 00:00:04.572 Node: VM-host-SM16 00:00:04.578 [Pipeline] cleanWs 00:00:04.585 [WS-CLEANUP] Deleting project workspace... 00:00:04.585 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.590 [WS-CLEANUP] done 00:00:04.784 [Pipeline] setCustomBuildProperty 00:00:04.854 [Pipeline] httpRequest 00:00:04.871 [Pipeline] echo 00:00:04.873 Sorcerer 10.211.164.101 is alive 00:00:04.882 [Pipeline] httpRequest 00:00:04.886 HttpMethod: GET 00:00:04.887 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.887 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.888 Response Code: HTTP/1.1 200 OK 00:00:04.888 Success: Status code 200 is in the accepted range: 200,404 00:00:04.889 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.925 [Pipeline] sh 00:00:06.205 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.215 [Pipeline] httpRequest 00:00:06.228 [Pipeline] echo 00:00:06.229 Sorcerer 10.211.164.101 is alive 00:00:06.236 [Pipeline] httpRequest 00:00:06.240 HttpMethod: GET 00:00:06.240 URL: http://10.211.164.101/packages/spdk_2f3522da79be0b4b631aa0ad68765970a588003a.tar.gz 00:00:06.241 Sending request to url: http://10.211.164.101/packages/spdk_2f3522da79be0b4b631aa0ad68765970a588003a.tar.gz 00:00:06.250 Response Code: HTTP/1.1 200 OK 00:00:06.251 Success: Status code 200 is in the accepted range: 200,404 00:00:06.251 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_2f3522da79be0b4b631aa0ad68765970a588003a.tar.gz 00:00:35.372 [Pipeline] sh 00:00:35.645 + tar --no-same-owner -xf spdk_2f3522da79be0b4b631aa0ad68765970a588003a.tar.gz 00:00:38.934 [Pipeline] sh 00:00:39.359 + git -C spdk log --oneline -n5 00:00:39.359 2f3522da7 nvmf: move register nvmf_poll_group_poll interrupt to nvmf 00:00:39.359 ef59a6f4b nvmf/tcp: replace pending_buf_queue with nvmf_tcp_request_get_buffers 00:00:39.359 a26f69189 nvmf: enable iobuf based queuing for nvmf requests 00:00:39.359 24034319f nvmf/tcp: use sock group polling for the listening sockets 00:00:39.359 245333351 nvmf/tcp: add transport field to the spdk_nvmf_tcp_port struct 00:00:39.380 [Pipeline] writeFile 00:00:39.394 [Pipeline] sh 00:00:39.696 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:39.706 [Pipeline] sh 00:00:39.983 + cat autorun-spdk.conf 00:00:39.983 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.983 SPDK_TEST_NVMF=1 00:00:39.983 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:39.983 SPDK_TEST_USDT=1 00:00:39.983 SPDK_TEST_NVMF_MDNS=1 00:00:39.983 SPDK_RUN_UBSAN=1 00:00:39.983 NET_TYPE=virt 00:00:39.983 SPDK_JSONRPC_GO_CLIENT=1 00:00:39.983 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:39.990 RUN_NIGHTLY=0 00:00:39.992 [Pipeline] } 00:00:40.014 [Pipeline] // stage 00:00:40.031 [Pipeline] stage 00:00:40.033 [Pipeline] { (Run VM) 00:00:40.046 [Pipeline] sh 00:00:40.323 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:40.323 + echo 'Start stage prepare_nvme.sh' 00:00:40.323 Start stage prepare_nvme.sh 00:00:40.323 + [[ -n 0 ]] 00:00:40.323 + disk_prefix=ex0 00:00:40.323 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:00:40.323 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:00:40.323 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:00:40.323 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.323 ++ SPDK_TEST_NVMF=1 00:00:40.323 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.323 ++ SPDK_TEST_USDT=1 00:00:40.323 ++ SPDK_TEST_NVMF_MDNS=1 00:00:40.323 ++ SPDK_RUN_UBSAN=1 00:00:40.323 ++ NET_TYPE=virt 00:00:40.323 ++ SPDK_JSONRPC_GO_CLIENT=1 
00:00:40.323 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:40.323 ++ RUN_NIGHTLY=0 00:00:40.323 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:40.323 + nvme_files=() 00:00:40.323 + declare -A nvme_files 00:00:40.323 + backend_dir=/var/lib/libvirt/images/backends 00:00:40.323 + nvme_files['nvme.img']=5G 00:00:40.323 + nvme_files['nvme-cmb.img']=5G 00:00:40.323 + nvme_files['nvme-multi0.img']=4G 00:00:40.323 + nvme_files['nvme-multi1.img']=4G 00:00:40.323 + nvme_files['nvme-multi2.img']=4G 00:00:40.323 + nvme_files['nvme-openstack.img']=8G 00:00:40.323 + nvme_files['nvme-zns.img']=5G 00:00:40.324 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:40.324 + (( SPDK_TEST_FTL == 1 )) 00:00:40.324 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:40.324 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:40.324 + for nvme in "${!nvme_files[@]}" 00:00:40.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:00:40.324 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:40.324 + for nvme in "${!nvme_files[@]}" 00:00:40.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:00:40.891 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:40.891 + for nvme in "${!nvme_files[@]}" 00:00:40.891 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:00:41.149 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:41.149 + for nvme in "${!nvme_files[@]}" 00:00:41.149 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:00:41.149 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:41.149 + for nvme in "${!nvme_files[@]}" 00:00:41.149 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:00:41.149 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:41.149 + for nvme in "${!nvme_files[@]}" 00:00:41.149 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:00:41.149 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:41.149 + for nvme in "${!nvme_files[@]}" 00:00:41.149 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:00:42.085 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:42.085 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:00:42.085 + echo 'End stage prepare_nvme.sh' 00:00:42.085 End stage prepare_nvme.sh 00:00:42.097 [Pipeline] sh 00:00:42.376 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:42.376 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:00:42.376 
00:00:42.376 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:00:42.376 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:00:42.376 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:42.376 HELP=0
00:00:42.376 DRY_RUN=0
00:00:42.376 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:00:42.376 NVME_DISKS_TYPE=nvme,nvme,
00:00:42.376 NVME_AUTO_CREATE=0
00:00:42.376 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:00:42.376 NVME_CMB=,,
00:00:42.376 NVME_PMR=,,
00:00:42.376 NVME_ZNS=,,
00:00:42.376 NVME_MS=,,
00:00:42.376 NVME_FDP=,,
00:00:42.376 SPDK_VAGRANT_DISTRO=fedora38
00:00:42.376 SPDK_VAGRANT_VMCPU=10
00:00:42.376 SPDK_VAGRANT_VMRAM=12288
00:00:42.376 SPDK_VAGRANT_PROVIDER=libvirt
00:00:42.376 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:42.376 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:42.376 SPDK_OPENSTACK_NETWORK=0
00:00:42.376 VAGRANT_PACKAGE_BOX=0
00:00:42.376 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:42.376 FORCE_DISTRO=true
00:00:42.376 VAGRANT_BOX_VERSION=
00:00:42.376 EXTRA_VAGRANTFILES=
00:00:42.376 NIC_MODEL=e1000
00:00:42.376
00:00:42.376 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt'
00:00:42.376 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:45.661 Bringing machine 'default' up with 'libvirt' provider...
00:00:45.661 ==> default: Creating image (snapshot of base box volume).
00:00:45.920 ==> default: Creating domain with the following settings...
00:00:45.920 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721058399_ab7c49ea976ef0747812
00:00:45.920 ==> default: -- Domain type: kvm
00:00:45.920 ==> default: -- Cpus: 10
00:00:45.920 ==> default: -- Feature: acpi
00:00:45.920 ==> default: -- Feature: apic
00:00:45.920 ==> default: -- Feature: pae
00:00:45.920 ==> default: -- Memory: 12288M
00:00:45.920 ==> default: -- Memory Backing: hugepages:
00:00:45.920 ==> default: -- Management MAC:
00:00:45.920 ==> default: -- Loader:
00:00:45.920 ==> default: -- Nvram:
00:00:45.920 ==> default: -- Base box: spdk/fedora38
00:00:45.920 ==> default: -- Storage pool: default
00:00:45.920 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721058399_ab7c49ea976ef0747812.img (20G)
00:00:45.920 ==> default: -- Volume Cache: default
00:00:45.920 ==> default: -- Kernel:
00:00:45.920 ==> default: -- Initrd:
00:00:45.920 ==> default: -- Graphics Type: vnc
00:00:45.920 ==> default: -- Graphics Port: -1
00:00:45.920 ==> default: -- Graphics IP: 127.0.0.1
00:00:45.920 ==> default: -- Graphics Password: Not defined
00:00:45.920 ==> default: -- Video Type: cirrus
00:00:45.920 ==> default: -- Video VRAM: 9216
00:00:45.920 ==> default: -- Sound Type:
00:00:45.920 ==> default: -- Keymap: en-us
00:00:45.920 ==> default: -- TPM Path:
00:00:45.920 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:45.920 ==> default: -- Command line args:
00:00:45.920 ==> default: -> value=-device,
00:00:45.920 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:45.920 ==> default: -> value=-drive,
00:00:45.920 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:00:45.920 ==> default: -> value=-device,
00:00:45.920 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:45.920 ==> default: -> value=-device,
00:00:45.920 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:45.920 ==> default: -> value=-drive,
00:00:45.920 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:45.920 ==> default: -> value=-device,
00:00:45.920 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:45.920 ==> default: -> value=-drive,
00:00:45.920 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:45.920 ==> default: -> value=-device,
00:00:45.920 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:45.920 ==> default: -> value=-drive,
00:00:45.920 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:45.920 ==> default: -> value=-device,
00:00:45.920 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:45.920 ==> default: Creating shared folders metadata...
00:00:45.920 ==> default: Starting domain.
00:00:47.822 ==> default: Waiting for domain to get an IP address...
00:01:05.897 ==> default: Waiting for SSH to become available...
00:01:05.897 ==> default: Configuring and enabling network interfaces...
00:01:11.163 default: SSH address: 192.168.121.114:22 00:01:11.163 default: SSH username: vagrant 00:01:11.163 default: SSH auth method: private key 00:01:12.538 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:20.653 ==> default: Mounting SSHFS shared folder... 00:01:22.028 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:22.028 ==> default: Checking Mount.. 00:01:22.963 ==> default: Folder Successfully Mounted! 00:01:22.963 ==> default: Running provisioner: file... 00:01:23.898 default: ~/.gitconfig => .gitconfig 00:01:24.464 00:01:24.464 SUCCESS! 00:01:24.465 00:01:24.465 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:24.465 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:24.465 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:24.465 00:01:24.477 [Pipeline] } 00:01:24.496 [Pipeline] // stage 00:01:24.503 [Pipeline] dir 00:01:24.504 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:24.505 [Pipeline] { 00:01:24.517 [Pipeline] catchError 00:01:24.518 [Pipeline] { 00:01:24.527 [Pipeline] sh 00:01:24.800 + vagrant ssh-config --host vagrant 00:01:24.800 + sed -ne /^Host/,$p 00:01:24.800 + tee ssh_conf 00:01:28.076 Host vagrant 00:01:28.076 HostName 192.168.121.114 00:01:28.076 User vagrant 00:01:28.076 Port 22 00:01:28.076 UserKnownHostsFile /dev/null 00:01:28.076 StrictHostKeyChecking no 00:01:28.076 PasswordAuthentication no 00:01:28.076 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:28.076 IdentitiesOnly yes 00:01:28.076 LogLevel FATAL 00:01:28.076 ForwardAgent yes 00:01:28.076 ForwardX11 yes 00:01:28.076 00:01:28.090 [Pipeline] withEnv 00:01:28.093 [Pipeline] { 00:01:28.107 [Pipeline] sh 00:01:28.386 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:28.386 source /etc/os-release 00:01:28.386 [[ -e /image.version ]] && img=$(< /image.version) 00:01:28.386 # Minimal, systemd-like check. 00:01:28.386 if [[ -e /.dockerenv ]]; then 00:01:28.386 # Clear garbage from the node's name: 00:01:28.386 # agt-er_autotest_547-896 -> autotest_547-896 00:01:28.386 # $HOSTNAME is the actual container id 00:01:28.386 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:28.386 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:28.386 # We can assume this is a mount from a host where container is running, 00:01:28.386 # so fetch its hostname to easily identify the target swarm worker. 
00:01:28.386 container="$(< /etc/hostname) ($agent)" 00:01:28.386 else 00:01:28.386 # Fallback 00:01:28.386 container=$agent 00:01:28.386 fi 00:01:28.386 fi 00:01:28.386 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:28.386 00:01:28.655 [Pipeline] } 00:01:28.677 [Pipeline] // withEnv 00:01:28.686 [Pipeline] setCustomBuildProperty 00:01:28.703 [Pipeline] stage 00:01:28.705 [Pipeline] { (Tests) 00:01:28.726 [Pipeline] sh 00:01:29.001 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:29.275 [Pipeline] sh 00:01:29.554 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:29.827 [Pipeline] timeout 00:01:29.827 Timeout set to expire in 40 min 00:01:29.829 [Pipeline] { 00:01:29.846 [Pipeline] sh 00:01:30.125 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:30.692 HEAD is now at 2f3522da7 nvmf: move register nvmf_poll_group_poll interrupt to nvmf 00:01:30.706 [Pipeline] sh 00:01:30.985 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:31.256 [Pipeline] sh 00:01:31.533 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:31.806 [Pipeline] sh 00:01:32.085 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:32.344 ++ readlink -f spdk_repo 00:01:32.344 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:32.344 + [[ -n /home/vagrant/spdk_repo ]] 00:01:32.344 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:32.344 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:32.344 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:32.344 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:32.344 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:32.344 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:32.344 + cd /home/vagrant/spdk_repo 00:01:32.344 + source /etc/os-release 00:01:32.344 ++ NAME='Fedora Linux' 00:01:32.344 ++ VERSION='38 (Cloud Edition)' 00:01:32.344 ++ ID=fedora 00:01:32.344 ++ VERSION_ID=38 00:01:32.344 ++ VERSION_CODENAME= 00:01:32.344 ++ PLATFORM_ID=platform:f38 00:01:32.344 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:32.344 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:32.344 ++ LOGO=fedora-logo-icon 00:01:32.344 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:32.344 ++ HOME_URL=https://fedoraproject.org/ 00:01:32.344 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:32.344 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:32.344 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:32.344 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:32.344 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:32.344 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:32.344 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:32.344 ++ SUPPORT_END=2024-05-14 00:01:32.344 ++ VARIANT='Cloud Edition' 00:01:32.344 ++ VARIANT_ID=cloud 00:01:32.344 + uname -a 00:01:32.344 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:32.344 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:32.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:32.624 Hugepages 00:01:32.624 node hugesize free / total 00:01:32.624 node0 1048576kB 0 / 0 00:01:32.624 node0 2048kB 0 / 0 00:01:32.624 00:01:32.624 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:32.882 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:32.882 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:32.882 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:32.882 + rm -f /tmp/spdk-ld-path 00:01:32.882 + source autorun-spdk.conf 00:01:32.882 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.882 ++ SPDK_TEST_NVMF=1 00:01:32.882 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.882 ++ SPDK_TEST_USDT=1 00:01:32.882 ++ SPDK_TEST_NVMF_MDNS=1 00:01:32.882 ++ SPDK_RUN_UBSAN=1 00:01:32.882 ++ NET_TYPE=virt 00:01:32.882 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:32.882 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.882 ++ RUN_NIGHTLY=0 00:01:32.882 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:32.882 + [[ -n '' ]] 00:01:32.882 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:32.882 + for M in /var/spdk/build-*-manifest.txt 00:01:32.882 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:32.882 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:32.882 + for M in /var/spdk/build-*-manifest.txt 00:01:32.882 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:32.882 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:32.882 ++ uname 00:01:32.882 + [[ Linux == \L\i\n\u\x ]] 00:01:32.882 + sudo dmesg -T 00:01:32.882 + sudo dmesg --clear 00:01:32.882 + dmesg_pid=5260 00:01:32.882 + [[ Fedora Linux == FreeBSD ]] 00:01:32.882 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.882 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:32.882 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:32.882 + [[ -x /usr/src/fio-static/fio ]] 00:01:32.882 + sudo dmesg -Tw 00:01:32.882 + 
export FIO_BIN=/usr/src/fio-static/fio 00:01:32.882 + FIO_BIN=/usr/src/fio-static/fio 00:01:32.882 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:32.882 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:32.882 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:32.882 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.883 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:32.883 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:32.883 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.883 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:32.883 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:32.883 Test configuration: 00:01:32.883 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.883 SPDK_TEST_NVMF=1 00:01:32.883 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.883 SPDK_TEST_USDT=1 00:01:32.883 SPDK_TEST_NVMF_MDNS=1 00:01:32.883 SPDK_RUN_UBSAN=1 00:01:32.883 NET_TYPE=virt 00:01:32.883 SPDK_JSONRPC_GO_CLIENT=1 00:01:32.883 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:33.141 RUN_NIGHTLY=0 15:47:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:33.141 15:47:26 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:33.141 15:47:26 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:33.141 15:47:26 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:33.141 15:47:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.141 15:47:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.141 15:47:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.141 15:47:26 -- paths/export.sh@5 -- $ export PATH 00:01:33.141 15:47:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.141 15:47:26 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:33.141 15:47:26 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:33.141 15:47:26 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721058446.XXXXXX 00:01:33.141 15:47:26 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721058446.klV9cy 00:01:33.141 15:47:26 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:33.141 15:47:26 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:33.141 15:47:26 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:33.141 15:47:26 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:33.141 15:47:26 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:33.141 15:47:26 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:33.141 15:47:26 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:33.141 15:47:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.141 15:47:26 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:01:33.141 15:47:26 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:33.141 15:47:26 -- pm/common@17 -- $ local monitor 00:01:33.141 15:47:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.141 15:47:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.141 15:47:26 -- pm/common@25 -- $ sleep 1 00:01:33.141 15:47:26 -- pm/common@21 -- $ date +%s 00:01:33.141 15:47:26 -- pm/common@21 -- $ date +%s 00:01:33.141 15:47:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721058446 00:01:33.141 15:47:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721058446 00:01:33.141 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721058446_collect-vmstat.pm.log 00:01:33.141 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721058446_collect-cpu-load.pm.log 00:01:34.076 15:47:27 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:34.076 15:47:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:34.076 15:47:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:34.076 15:47:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:34.076 15:47:27 -- spdk/autobuild.sh@16 -- $ date -u 00:01:34.076 Mon Jul 15 03:47:27 PM UTC 2024 00:01:34.076 15:47:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:34.076 v24.09-pre-212-g2f3522da7 00:01:34.076 15:47:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:34.076 15:47:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:34.076 15:47:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:34.076 15:47:27 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:34.076 15:47:27 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:34.076 15:47:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.076 ************************************ 00:01:34.076 START TEST ubsan 00:01:34.076 ************************************ 00:01:34.076 using ubsan 00:01:34.076 15:47:27 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:34.076 00:01:34.076 
real 0m0.000s 00:01:34.076 user 0m0.000s 00:01:34.076 sys 0m0.000s 00:01:34.076 15:47:27 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:34.076 15:47:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:34.076 ************************************ 00:01:34.076 END TEST ubsan 00:01:34.076 ************************************ 00:01:34.076 15:47:27 -- common/autotest_common.sh@1142 -- $ return 0 00:01:34.076 15:47:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:34.076 15:47:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:34.076 15:47:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:34.076 15:47:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:34.076 15:47:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:34.076 15:47:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:34.076 15:47:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:34.076 15:47:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:34.076 15:47:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:34.334 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:34.334 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:34.591 Using 'verbs' RDMA provider 00:01:50.448 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:02.648 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:02.648 go version go1.21.1 linux/amd64 00:02:02.648 Creating mk/config.mk...done. 00:02:02.648 Creating mk/cc.flags.mk...done. 00:02:02.648 Type 'make' to build. 00:02:02.648 15:47:56 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:02.648 15:47:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:02.648 15:47:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:02.648 15:47:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.648 ************************************ 00:02:02.648 START TEST make 00:02:02.648 ************************************ 00:02:02.648 15:47:56 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:02.907 make[1]: Nothing to be done for 'all'. 
00:02:15.149 The Meson build system 00:02:15.149 Version: 1.3.1 00:02:15.149 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:15.149 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:15.149 Build type: native build 00:02:15.149 Program cat found: YES (/usr/bin/cat) 00:02:15.149 Project name: DPDK 00:02:15.149 Project version: 24.03.0 00:02:15.149 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:15.149 C linker for the host machine: cc ld.bfd 2.39-16 00:02:15.149 Host machine cpu family: x86_64 00:02:15.149 Host machine cpu: x86_64 00:02:15.149 Message: ## Building in Developer Mode ## 00:02:15.149 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:15.149 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:15.149 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:15.149 Program python3 found: YES (/usr/bin/python3) 00:02:15.149 Program cat found: YES (/usr/bin/cat) 00:02:15.149 Compiler for C supports arguments -march=native: YES 00:02:15.149 Checking for size of "void *" : 8 00:02:15.149 Checking for size of "void *" : 8 (cached) 00:02:15.149 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:15.149 Library m found: YES 00:02:15.149 Library numa found: YES 00:02:15.149 Has header "numaif.h" : YES 00:02:15.149 Library fdt found: NO 00:02:15.149 Library execinfo found: NO 00:02:15.149 Has header "execinfo.h" : YES 00:02:15.149 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:15.149 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:15.149 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:15.149 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:15.149 Run-time dependency openssl found: YES 3.0.9 00:02:15.149 Run-time dependency libpcap found: YES 1.10.4 00:02:15.149 Has header "pcap.h" with dependency libpcap: YES 00:02:15.149 Compiler for C supports arguments -Wcast-qual: YES 00:02:15.149 Compiler for C supports arguments -Wdeprecated: YES 00:02:15.149 Compiler for C supports arguments -Wformat: YES 00:02:15.149 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:15.149 Compiler for C supports arguments -Wformat-security: NO 00:02:15.149 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.149 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:15.149 Compiler for C supports arguments -Wnested-externs: YES 00:02:15.149 Compiler for C supports arguments -Wold-style-definition: YES 00:02:15.149 Compiler for C supports arguments -Wpointer-arith: YES 00:02:15.149 Compiler for C supports arguments -Wsign-compare: YES 00:02:15.149 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:15.149 Compiler for C supports arguments -Wundef: YES 00:02:15.149 Compiler for C supports arguments -Wwrite-strings: YES 00:02:15.149 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:15.149 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:15.149 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.149 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:15.149 Program objdump found: YES (/usr/bin/objdump) 00:02:15.149 Compiler for C supports arguments -mavx512f: YES 00:02:15.149 Checking if "AVX512 checking" compiles: YES 00:02:15.149 Fetching value of define "__SSE4_2__" : 1 00:02:15.149 Fetching value of define 
"__AES__" : 1 00:02:15.149 Fetching value of define "__AVX__" : 1 00:02:15.149 Fetching value of define "__AVX2__" : 1 00:02:15.149 Fetching value of define "__AVX512BW__" : (undefined) 00:02:15.149 Fetching value of define "__AVX512CD__" : (undefined) 00:02:15.149 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:15.149 Fetching value of define "__AVX512F__" : (undefined) 00:02:15.149 Fetching value of define "__AVX512VL__" : (undefined) 00:02:15.149 Fetching value of define "__PCLMUL__" : 1 00:02:15.149 Fetching value of define "__RDRND__" : 1 00:02:15.149 Fetching value of define "__RDSEED__" : 1 00:02:15.149 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:15.149 Fetching value of define "__znver1__" : (undefined) 00:02:15.149 Fetching value of define "__znver2__" : (undefined) 00:02:15.149 Fetching value of define "__znver3__" : (undefined) 00:02:15.149 Fetching value of define "__znver4__" : (undefined) 00:02:15.149 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:15.149 Message: lib/log: Defining dependency "log" 00:02:15.149 Message: lib/kvargs: Defining dependency "kvargs" 00:02:15.149 Message: lib/telemetry: Defining dependency "telemetry" 00:02:15.149 Checking for function "getentropy" : NO 00:02:15.149 Message: lib/eal: Defining dependency "eal" 00:02:15.149 Message: lib/ring: Defining dependency "ring" 00:02:15.149 Message: lib/rcu: Defining dependency "rcu" 00:02:15.149 Message: lib/mempool: Defining dependency "mempool" 00:02:15.149 Message: lib/mbuf: Defining dependency "mbuf" 00:02:15.149 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:15.149 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.149 Compiler for C supports arguments -mpclmul: YES 00:02:15.149 Compiler for C supports arguments -maes: YES 00:02:15.149 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.149 Compiler for C supports arguments -mavx512bw: YES 00:02:15.149 Compiler for C supports arguments -mavx512dq: YES 00:02:15.149 Compiler for C supports arguments -mavx512vl: YES 00:02:15.149 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:15.149 Compiler for C supports arguments -mavx2: YES 00:02:15.149 Compiler for C supports arguments -mavx: YES 00:02:15.149 Message: lib/net: Defining dependency "net" 00:02:15.149 Message: lib/meter: Defining dependency "meter" 00:02:15.149 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.149 Message: lib/pci: Defining dependency "pci" 00:02:15.149 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.149 Message: lib/hash: Defining dependency "hash" 00:02:15.149 Message: lib/timer: Defining dependency "timer" 00:02:15.149 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.149 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.149 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.149 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.149 Message: lib/power: Defining dependency "power" 00:02:15.149 Message: lib/reorder: Defining dependency "reorder" 00:02:15.149 Message: lib/security: Defining dependency "security" 00:02:15.149 Has header "linux/userfaultfd.h" : YES 00:02:15.149 Has header "linux/vduse.h" : YES 00:02:15.149 Message: lib/vhost: Defining dependency "vhost" 00:02:15.149 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.149 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.149 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.149 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.149 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:15.149 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:15.149 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:15.149 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:15.149 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:15.149 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:15.149 Program doxygen found: YES (/usr/bin/doxygen) 00:02:15.149 Configuring doxy-api-html.conf using configuration 00:02:15.149 Configuring doxy-api-man.conf using configuration 00:02:15.149 Program mandb found: YES (/usr/bin/mandb) 00:02:15.149 Program sphinx-build found: NO 00:02:15.149 Configuring rte_build_config.h using configuration 00:02:15.149 Message: 00:02:15.149 ================= 00:02:15.149 Applications Enabled 00:02:15.149 ================= 00:02:15.149 00:02:15.149 apps: 00:02:15.149 00:02:15.149 00:02:15.149 Message: 00:02:15.149 ================= 00:02:15.149 Libraries Enabled 00:02:15.149 ================= 00:02:15.149 00:02:15.149 libs: 00:02:15.149 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:15.149 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:15.149 cryptodev, dmadev, power, reorder, security, vhost, 00:02:15.149 00:02:15.149 Message: 00:02:15.149 =============== 00:02:15.149 Drivers Enabled 00:02:15.149 =============== 00:02:15.149 00:02:15.149 common: 00:02:15.149 00:02:15.149 bus: 00:02:15.149 pci, vdev, 00:02:15.149 mempool: 00:02:15.150 ring, 00:02:15.150 dma: 00:02:15.150 00:02:15.150 net: 00:02:15.150 00:02:15.150 crypto: 00:02:15.150 00:02:15.150 compress: 00:02:15.150 00:02:15.150 vdpa: 00:02:15.150 00:02:15.150 00:02:15.150 Message: 00:02:15.150 ================= 00:02:15.150 Content Skipped 00:02:15.150 ================= 00:02:15.150 00:02:15.150 apps: 00:02:15.150 dumpcap: explicitly disabled via build config 00:02:15.150 graph: explicitly disabled via build config 00:02:15.150 pdump: explicitly disabled via build config 00:02:15.150 proc-info: explicitly disabled via build config 00:02:15.150 test-acl: explicitly disabled via build config 00:02:15.150 test-bbdev: explicitly disabled via build config 00:02:15.150 test-cmdline: explicitly disabled via build config 00:02:15.150 test-compress-perf: explicitly disabled via build config 00:02:15.150 test-crypto-perf: explicitly disabled via build config 00:02:15.150 test-dma-perf: explicitly disabled via build config 00:02:15.150 test-eventdev: explicitly disabled via build config 00:02:15.150 test-fib: explicitly disabled via build config 00:02:15.150 test-flow-perf: explicitly disabled via build config 00:02:15.150 test-gpudev: explicitly disabled via build config 00:02:15.150 test-mldev: explicitly disabled via build config 00:02:15.150 test-pipeline: explicitly disabled via build config 00:02:15.150 test-pmd: explicitly disabled via build config 00:02:15.150 test-regex: explicitly disabled via build config 00:02:15.150 test-sad: explicitly disabled via build config 00:02:15.150 test-security-perf: explicitly disabled via build config 00:02:15.150 00:02:15.150 libs: 00:02:15.150 argparse: explicitly disabled via build config 00:02:15.150 metrics: explicitly disabled via build config 00:02:15.150 acl: explicitly disabled via build config 00:02:15.150 bbdev: explicitly disabled via build config 00:02:15.150 
bitratestats: explicitly disabled via build config 00:02:15.150 bpf: explicitly disabled via build config 00:02:15.150 cfgfile: explicitly disabled via build config 00:02:15.150 distributor: explicitly disabled via build config 00:02:15.150 efd: explicitly disabled via build config 00:02:15.150 eventdev: explicitly disabled via build config 00:02:15.150 dispatcher: explicitly disabled via build config 00:02:15.150 gpudev: explicitly disabled via build config 00:02:15.150 gro: explicitly disabled via build config 00:02:15.150 gso: explicitly disabled via build config 00:02:15.150 ip_frag: explicitly disabled via build config 00:02:15.150 jobstats: explicitly disabled via build config 00:02:15.150 latencystats: explicitly disabled via build config 00:02:15.150 lpm: explicitly disabled via build config 00:02:15.150 member: explicitly disabled via build config 00:02:15.150 pcapng: explicitly disabled via build config 00:02:15.150 rawdev: explicitly disabled via build config 00:02:15.150 regexdev: explicitly disabled via build config 00:02:15.150 mldev: explicitly disabled via build config 00:02:15.150 rib: explicitly disabled via build config 00:02:15.150 sched: explicitly disabled via build config 00:02:15.150 stack: explicitly disabled via build config 00:02:15.150 ipsec: explicitly disabled via build config 00:02:15.150 pdcp: explicitly disabled via build config 00:02:15.150 fib: explicitly disabled via build config 00:02:15.150 port: explicitly disabled via build config 00:02:15.150 pdump: explicitly disabled via build config 00:02:15.150 table: explicitly disabled via build config 00:02:15.150 pipeline: explicitly disabled via build config 00:02:15.150 graph: explicitly disabled via build config 00:02:15.150 node: explicitly disabled via build config 00:02:15.150 00:02:15.150 drivers: 00:02:15.150 common/cpt: not in enabled drivers build config 00:02:15.150 common/dpaax: not in enabled drivers build config 00:02:15.150 common/iavf: not in enabled drivers build config 00:02:15.150 common/idpf: not in enabled drivers build config 00:02:15.150 common/ionic: not in enabled drivers build config 00:02:15.150 common/mvep: not in enabled drivers build config 00:02:15.150 common/octeontx: not in enabled drivers build config 00:02:15.150 bus/auxiliary: not in enabled drivers build config 00:02:15.150 bus/cdx: not in enabled drivers build config 00:02:15.150 bus/dpaa: not in enabled drivers build config 00:02:15.150 bus/fslmc: not in enabled drivers build config 00:02:15.150 bus/ifpga: not in enabled drivers build config 00:02:15.150 bus/platform: not in enabled drivers build config 00:02:15.150 bus/uacce: not in enabled drivers build config 00:02:15.150 bus/vmbus: not in enabled drivers build config 00:02:15.150 common/cnxk: not in enabled drivers build config 00:02:15.150 common/mlx5: not in enabled drivers build config 00:02:15.150 common/nfp: not in enabled drivers build config 00:02:15.150 common/nitrox: not in enabled drivers build config 00:02:15.150 common/qat: not in enabled drivers build config 00:02:15.150 common/sfc_efx: not in enabled drivers build config 00:02:15.150 mempool/bucket: not in enabled drivers build config 00:02:15.150 mempool/cnxk: not in enabled drivers build config 00:02:15.150 mempool/dpaa: not in enabled drivers build config 00:02:15.150 mempool/dpaa2: not in enabled drivers build config 00:02:15.150 mempool/octeontx: not in enabled drivers build config 00:02:15.150 mempool/stack: not in enabled drivers build config 00:02:15.150 dma/cnxk: not in enabled drivers build 
config 00:02:15.150 dma/dpaa: not in enabled drivers build config 00:02:15.150 dma/dpaa2: not in enabled drivers build config 00:02:15.150 dma/hisilicon: not in enabled drivers build config 00:02:15.150 dma/idxd: not in enabled drivers build config 00:02:15.150 dma/ioat: not in enabled drivers build config 00:02:15.150 dma/skeleton: not in enabled drivers build config 00:02:15.150 net/af_packet: not in enabled drivers build config 00:02:15.150 net/af_xdp: not in enabled drivers build config 00:02:15.150 net/ark: not in enabled drivers build config 00:02:15.150 net/atlantic: not in enabled drivers build config 00:02:15.150 net/avp: not in enabled drivers build config 00:02:15.150 net/axgbe: not in enabled drivers build config 00:02:15.150 net/bnx2x: not in enabled drivers build config 00:02:15.150 net/bnxt: not in enabled drivers build config 00:02:15.150 net/bonding: not in enabled drivers build config 00:02:15.150 net/cnxk: not in enabled drivers build config 00:02:15.150 net/cpfl: not in enabled drivers build config 00:02:15.150 net/cxgbe: not in enabled drivers build config 00:02:15.150 net/dpaa: not in enabled drivers build config 00:02:15.150 net/dpaa2: not in enabled drivers build config 00:02:15.150 net/e1000: not in enabled drivers build config 00:02:15.150 net/ena: not in enabled drivers build config 00:02:15.150 net/enetc: not in enabled drivers build config 00:02:15.150 net/enetfec: not in enabled drivers build config 00:02:15.150 net/enic: not in enabled drivers build config 00:02:15.150 net/failsafe: not in enabled drivers build config 00:02:15.150 net/fm10k: not in enabled drivers build config 00:02:15.150 net/gve: not in enabled drivers build config 00:02:15.150 net/hinic: not in enabled drivers build config 00:02:15.150 net/hns3: not in enabled drivers build config 00:02:15.150 net/i40e: not in enabled drivers build config 00:02:15.150 net/iavf: not in enabled drivers build config 00:02:15.150 net/ice: not in enabled drivers build config 00:02:15.150 net/idpf: not in enabled drivers build config 00:02:15.150 net/igc: not in enabled drivers build config 00:02:15.150 net/ionic: not in enabled drivers build config 00:02:15.150 net/ipn3ke: not in enabled drivers build config 00:02:15.150 net/ixgbe: not in enabled drivers build config 00:02:15.150 net/mana: not in enabled drivers build config 00:02:15.150 net/memif: not in enabled drivers build config 00:02:15.150 net/mlx4: not in enabled drivers build config 00:02:15.150 net/mlx5: not in enabled drivers build config 00:02:15.150 net/mvneta: not in enabled drivers build config 00:02:15.150 net/mvpp2: not in enabled drivers build config 00:02:15.150 net/netvsc: not in enabled drivers build config 00:02:15.150 net/nfb: not in enabled drivers build config 00:02:15.150 net/nfp: not in enabled drivers build config 00:02:15.150 net/ngbe: not in enabled drivers build config 00:02:15.150 net/null: not in enabled drivers build config 00:02:15.150 net/octeontx: not in enabled drivers build config 00:02:15.150 net/octeon_ep: not in enabled drivers build config 00:02:15.150 net/pcap: not in enabled drivers build config 00:02:15.150 net/pfe: not in enabled drivers build config 00:02:15.150 net/qede: not in enabled drivers build config 00:02:15.150 net/ring: not in enabled drivers build config 00:02:15.150 net/sfc: not in enabled drivers build config 00:02:15.150 net/softnic: not in enabled drivers build config 00:02:15.150 net/tap: not in enabled drivers build config 00:02:15.150 net/thunderx: not in enabled drivers build config 00:02:15.150 
net/txgbe: not in enabled drivers build config 00:02:15.150 net/vdev_netvsc: not in enabled drivers build config 00:02:15.150 net/vhost: not in enabled drivers build config 00:02:15.150 net/virtio: not in enabled drivers build config 00:02:15.150 net/vmxnet3: not in enabled drivers build config 00:02:15.150 raw/*: missing internal dependency, "rawdev" 00:02:15.150 crypto/armv8: not in enabled drivers build config 00:02:15.150 crypto/bcmfs: not in enabled drivers build config 00:02:15.150 crypto/caam_jr: not in enabled drivers build config 00:02:15.150 crypto/ccp: not in enabled drivers build config 00:02:15.150 crypto/cnxk: not in enabled drivers build config 00:02:15.150 crypto/dpaa_sec: not in enabled drivers build config 00:02:15.150 crypto/dpaa2_sec: not in enabled drivers build config 00:02:15.150 crypto/ipsec_mb: not in enabled drivers build config 00:02:15.150 crypto/mlx5: not in enabled drivers build config 00:02:15.150 crypto/mvsam: not in enabled drivers build config 00:02:15.150 crypto/nitrox: not in enabled drivers build config 00:02:15.150 crypto/null: not in enabled drivers build config 00:02:15.150 crypto/octeontx: not in enabled drivers build config 00:02:15.150 crypto/openssl: not in enabled drivers build config 00:02:15.150 crypto/scheduler: not in enabled drivers build config 00:02:15.150 crypto/uadk: not in enabled drivers build config 00:02:15.150 crypto/virtio: not in enabled drivers build config 00:02:15.150 compress/isal: not in enabled drivers build config 00:02:15.150 compress/mlx5: not in enabled drivers build config 00:02:15.150 compress/nitrox: not in enabled drivers build config 00:02:15.150 compress/octeontx: not in enabled drivers build config 00:02:15.150 compress/zlib: not in enabled drivers build config 00:02:15.150 regex/*: missing internal dependency, "regexdev" 00:02:15.150 ml/*: missing internal dependency, "mldev" 00:02:15.150 vdpa/ifc: not in enabled drivers build config 00:02:15.150 vdpa/mlx5: not in enabled drivers build config 00:02:15.150 vdpa/nfp: not in enabled drivers build config 00:02:15.150 vdpa/sfc: not in enabled drivers build config 00:02:15.150 event/*: missing internal dependency, "eventdev" 00:02:15.150 baseband/*: missing internal dependency, "bbdev" 00:02:15.150 gpu/*: missing internal dependency, "gpudev" 00:02:15.150 00:02:15.150 00:02:15.150 Build targets in project: 85 00:02:15.150 00:02:15.150 DPDK 24.03.0 00:02:15.150 00:02:15.150 User defined options 00:02:15.151 buildtype : debug 00:02:15.151 default_library : shared 00:02:15.151 libdir : lib 00:02:15.151 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:15.151 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:15.151 c_link_args : 00:02:15.151 cpu_instruction_set: native 00:02:15.151 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:15.151 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:15.151 enable_docs : false 00:02:15.151 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:15.151 enable_kmods : false 00:02:15.151 max_lcores : 128 00:02:15.151 tests : false 00:02:15.151 00:02:15.151 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.151 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:15.151 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:15.151 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.151 [3/268] Linking static target lib/librte_kvargs.a 00:02:15.151 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:15.151 [5/268] Linking static target lib/librte_log.a 00:02:15.151 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.408 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.665 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.665 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.666 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:15.666 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.923 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.923 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.923 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.923 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:16.181 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:16.181 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.181 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:16.181 [19/268] Linking static target lib/librte_telemetry.a 00:02:16.181 [20/268] Linking target lib/librte_log.so.24.1 00:02:16.440 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:16.440 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:16.698 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.698 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:16.698 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:16.698 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:16.698 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:16.698 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:16.956 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:16.956 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:16.956 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:17.215 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.215 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:17.215 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:17.215 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:17.474 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:17.474 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:17.732 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:17.732 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:17.732 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:17.732 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:17.732 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:17.732 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:17.990 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:17.990 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:17.990 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:18.248 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:18.248 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:18.248 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.506 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:18.765 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:18.765 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.765 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:18.765 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:18.765 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:19.024 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:19.024 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:19.282 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:19.282 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:19.282 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:19.282 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:19.540 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:19.540 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:19.797 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:19.797 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:20.054 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:20.054 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:20.311 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:20.311 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:20.569 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:20.569 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:20.569 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:20.569 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:20.829 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:20.829 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:20.829 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:20.829 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:20.829 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.087 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:21.087 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:21.087 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:21.345 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:21.345 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:21.604 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:21.604 [85/268] Linking static target lib/librte_eal.a 00:02:21.862 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:21.862 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:21.862 [88/268] Linking static target lib/librte_ring.a 00:02:21.862 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:21.862 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:22.121 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:22.121 [92/268] Linking static target lib/librte_rcu.a 00:02:22.121 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:22.121 [94/268] Linking static target lib/librte_mempool.a 00:02:22.379 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:22.379 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:22.379 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.637 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:22.637 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.637 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:22.637 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:23.203 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:23.203 [103/268] Linking static target lib/librte_mbuf.a 00:02:23.461 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:23.461 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:23.461 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:23.461 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:23.461 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:23.461 [109/268] Linking static target lib/librte_net.a 00:02:23.461 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:23.461 [111/268] Linking static target lib/librte_meter.a 00:02:23.720 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.978 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:23.978 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.978 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.236 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.236 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.494 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.494 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:25.060 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 
00:02:25.318 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:25.318 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:25.318 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:25.318 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:25.577 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:25.577 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:25.577 [127/268] Linking static target lib/librte_pci.a 00:02:25.577 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:25.577 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:25.836 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:25.836 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.836 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:25.836 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:26.094 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.094 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:26.094 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:26.094 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:26.094 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:26.094 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:26.094 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:26.094 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:26.094 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:26.094 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:26.352 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:26.352 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:26.352 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:26.614 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:26.614 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:26.614 [149/268] Linking static target lib/librte_cmdline.a 00:02:26.614 [150/268] Linking static target lib/librte_ethdev.a 00:02:26.614 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:26.880 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:26.880 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:26.880 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:26.880 [155/268] Linking static target lib/librte_timer.a 00:02:27.138 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:27.138 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:27.704 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:27.704 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:27.704 [160/268] Linking static target lib/librte_compressdev.a 00:02:27.704 [161/268] Generating 
lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.704 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:27.704 [163/268] Linking static target lib/librte_hash.a 00:02:27.704 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:27.704 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:27.962 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:27.962 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:28.219 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:28.219 [169/268] Linking static target lib/librte_dmadev.a 00:02:28.219 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.219 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:28.219 [172/268] Linking static target lib/librte_cryptodev.a 00:02:28.219 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:28.219 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:28.476 [175/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:28.733 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.733 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:28.991 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.991 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:28.991 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:28.991 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.991 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:29.249 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:29.249 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:29.506 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:29.506 [186/268] Linking static target lib/librte_power.a 00:02:29.763 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:29.763 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:29.763 [189/268] Linking static target lib/librte_reorder.a 00:02:29.763 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:30.020 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:30.020 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:30.020 [193/268] Linking static target lib/librte_security.a 00:02:30.278 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:30.278 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.536 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.794 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.795 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:30.795 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:31.053 [200/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:31.053 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.053 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:31.311 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:31.311 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:31.311 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:31.311 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:31.311 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:31.569 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:31.569 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:31.569 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:31.569 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:31.569 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:31.827 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:31.827 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:31.827 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:31.827 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:31.827 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:31.827 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:31.827 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:31.827 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:32.085 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:32.085 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:32.085 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.344 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:32.344 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.344 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.344 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:32.344 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.951 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:32.951 [230/268] Linking static target lib/librte_vhost.a 00:02:33.526 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.784 [232/268] Linking target lib/librte_eal.so.24.1 00:02:33.784 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:34.042 [234/268] Linking target lib/librte_meter.so.24.1 00:02:34.042 [235/268] Linking target lib/librte_ring.so.24.1 00:02:34.042 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:34.042 [237/268] Linking target lib/librte_pci.so.24.1 00:02:34.042 [238/268] Linking target lib/librte_timer.so.24.1 00:02:34.042 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 
00:02:34.042 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:34.042 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:34.042 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:34.042 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:34.042 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:34.042 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:34.042 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:34.042 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:34.300 [248/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.300 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:34.301 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:34.301 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:34.301 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:34.559 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:34.559 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:34.559 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:34.559 [256/268] Linking target lib/librte_net.so.24.1 00:02:34.559 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:34.559 [258/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.817 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:34.817 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:34.817 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:34.817 [262/268] Linking target lib/librte_security.so.24.1 00:02:34.817 [263/268] Linking target lib/librte_hash.so.24.1 00:02:34.817 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:34.817 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:35.075 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:35.075 [267/268] Linking target lib/librte_power.so.24.1 00:02:35.075 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:35.075 INFO: autodetecting backend as ninja 00:02:35.075 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:36.448 CC lib/log/log.o 00:02:36.448 CC lib/log/log_flags.o 00:02:36.448 CC lib/log/log_deprecated.o 00:02:36.448 CC lib/ut/ut.o 00:02:36.448 CC lib/ut_mock/mock.o 00:02:36.448 LIB libspdk_ut_mock.a 00:02:36.448 LIB libspdk_ut.a 00:02:36.448 SO libspdk_ut_mock.so.6.0 00:02:36.448 LIB libspdk_log.a 00:02:36.448 SO libspdk_ut.so.2.0 00:02:36.706 SO libspdk_log.so.7.0 00:02:36.706 SYMLINK libspdk_ut.so 00:02:36.706 SYMLINK libspdk_ut_mock.so 00:02:36.706 SYMLINK libspdk_log.so 00:02:36.965 CC lib/ioat/ioat.o 00:02:36.965 CC lib/dma/dma.o 00:02:36.965 CC lib/util/bit_array.o 00:02:36.965 CC lib/util/base64.o 00:02:36.965 CXX lib/trace_parser/trace.o 00:02:36.965 CC lib/util/cpuset.o 00:02:36.965 CC lib/util/crc16.o 00:02:36.965 CC lib/util/crc32.o 00:02:36.965 CC lib/util/crc32c.o 00:02:36.965 CC lib/vfio_user/host/vfio_user_pci.o 00:02:36.965 CC lib/util/crc32_ieee.o 00:02:36.965 CC lib/util/crc64.o 00:02:36.965 CC lib/util/dif.o 
00:02:37.222 CC lib/util/fd.o 00:02:37.222 CC lib/util/file.o 00:02:37.222 LIB libspdk_dma.a 00:02:37.222 CC lib/util/hexlify.o 00:02:37.222 SO libspdk_dma.so.4.0 00:02:37.222 CC lib/util/iov.o 00:02:37.222 CC lib/util/math.o 00:02:37.222 LIB libspdk_ioat.a 00:02:37.222 SYMLINK libspdk_dma.so 00:02:37.222 CC lib/util/pipe.o 00:02:37.222 SO libspdk_ioat.so.7.0 00:02:37.222 CC lib/vfio_user/host/vfio_user.o 00:02:37.222 CC lib/util/strerror_tls.o 00:02:37.222 CC lib/util/string.o 00:02:37.222 SYMLINK libspdk_ioat.so 00:02:37.481 CC lib/util/uuid.o 00:02:37.481 CC lib/util/fd_group.o 00:02:37.481 CC lib/util/xor.o 00:02:37.481 CC lib/util/zipf.o 00:02:37.481 LIB libspdk_vfio_user.a 00:02:37.481 SO libspdk_vfio_user.so.5.0 00:02:37.738 SYMLINK libspdk_vfio_user.so 00:02:37.738 LIB libspdk_util.a 00:02:37.738 SO libspdk_util.so.9.1 00:02:37.996 SYMLINK libspdk_util.so 00:02:37.996 LIB libspdk_trace_parser.a 00:02:37.996 SO libspdk_trace_parser.so.5.0 00:02:38.253 SYMLINK libspdk_trace_parser.so 00:02:38.254 CC lib/conf/conf.o 00:02:38.254 CC lib/rdma_utils/rdma_utils.o 00:02:38.254 CC lib/vmd/vmd.o 00:02:38.254 CC lib/vmd/led.o 00:02:38.254 CC lib/idxd/idxd.o 00:02:38.254 CC lib/env_dpdk/env.o 00:02:38.254 CC lib/idxd/idxd_user.o 00:02:38.254 CC lib/env_dpdk/memory.o 00:02:38.254 CC lib/json/json_parse.o 00:02:38.254 CC lib/rdma_provider/common.o 00:02:38.254 CC lib/json/json_util.o 00:02:38.512 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:38.512 LIB libspdk_conf.a 00:02:38.512 SO libspdk_conf.so.6.0 00:02:38.512 CC lib/idxd/idxd_kernel.o 00:02:38.512 CC lib/env_dpdk/pci.o 00:02:38.512 SYMLINK libspdk_conf.so 00:02:38.512 CC lib/env_dpdk/init.o 00:02:38.512 LIB libspdk_rdma_utils.a 00:02:38.512 SO libspdk_rdma_utils.so.1.0 00:02:38.512 CC lib/json/json_write.o 00:02:38.512 LIB libspdk_rdma_provider.a 00:02:38.512 SYMLINK libspdk_rdma_utils.so 00:02:38.512 CC lib/env_dpdk/threads.o 00:02:38.512 CC lib/env_dpdk/pci_ioat.o 00:02:38.770 SO libspdk_rdma_provider.so.6.0 00:02:38.770 SYMLINK libspdk_rdma_provider.so 00:02:38.770 CC lib/env_dpdk/pci_virtio.o 00:02:38.770 CC lib/env_dpdk/pci_vmd.o 00:02:38.770 CC lib/env_dpdk/pci_idxd.o 00:02:38.770 CC lib/env_dpdk/pci_event.o 00:02:38.770 CC lib/env_dpdk/sigbus_handler.o 00:02:38.770 CC lib/env_dpdk/pci_dpdk.o 00:02:38.770 LIB libspdk_idxd.a 00:02:38.770 LIB libspdk_vmd.a 00:02:38.770 LIB libspdk_json.a 00:02:39.029 SO libspdk_vmd.so.6.0 00:02:39.029 SO libspdk_idxd.so.12.0 00:02:39.029 SO libspdk_json.so.6.0 00:02:39.029 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:39.029 SYMLINK libspdk_vmd.so 00:02:39.029 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:39.029 SYMLINK libspdk_idxd.so 00:02:39.029 SYMLINK libspdk_json.so 00:02:39.287 CC lib/jsonrpc/jsonrpc_server.o 00:02:39.287 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:39.287 CC lib/jsonrpc/jsonrpc_client.o 00:02:39.287 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:39.544 LIB libspdk_jsonrpc.a 00:02:39.544 LIB libspdk_env_dpdk.a 00:02:39.544 SO libspdk_jsonrpc.so.6.0 00:02:39.811 SO libspdk_env_dpdk.so.14.1 00:02:39.811 SYMLINK libspdk_jsonrpc.so 00:02:39.811 SYMLINK libspdk_env_dpdk.so 00:02:40.094 CC lib/rpc/rpc.o 00:02:40.352 LIB libspdk_rpc.a 00:02:40.352 SO libspdk_rpc.so.6.0 00:02:40.352 SYMLINK libspdk_rpc.so 00:02:40.610 CC lib/notify/notify.o 00:02:40.610 CC lib/notify/notify_rpc.o 00:02:40.610 CC lib/trace/trace.o 00:02:40.610 CC lib/trace/trace_flags.o 00:02:40.610 CC lib/trace/trace_rpc.o 00:02:40.610 CC lib/keyring/keyring.o 00:02:40.610 CC lib/keyring/keyring_rpc.o 00:02:40.868 LIB libspdk_notify.a 
00:02:40.868 SO libspdk_notify.so.6.0 00:02:40.868 LIB libspdk_keyring.a 00:02:40.868 LIB libspdk_trace.a 00:02:40.868 SO libspdk_keyring.so.1.0 00:02:40.868 SYMLINK libspdk_notify.so 00:02:40.868 SO libspdk_trace.so.10.0 00:02:41.127 SYMLINK libspdk_keyring.so 00:02:41.127 SYMLINK libspdk_trace.so 00:02:41.385 CC lib/thread/thread.o 00:02:41.385 CC lib/thread/iobuf.o 00:02:41.385 CC lib/sock/sock.o 00:02:41.385 CC lib/sock/sock_rpc.o 00:02:41.953 LIB libspdk_sock.a 00:02:41.953 SO libspdk_sock.so.10.0 00:02:41.953 SYMLINK libspdk_sock.so 00:02:42.212 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:42.212 CC lib/nvme/nvme_fabric.o 00:02:42.212 CC lib/nvme/nvme_ctrlr.o 00:02:42.212 CC lib/nvme/nvme_ns_cmd.o 00:02:42.212 CC lib/nvme/nvme_ns.o 00:02:42.212 CC lib/nvme/nvme_pcie_common.o 00:02:42.212 CC lib/nvme/nvme_pcie.o 00:02:42.212 CC lib/nvme/nvme_qpair.o 00:02:42.212 CC lib/nvme/nvme.o 00:02:43.146 LIB libspdk_thread.a 00:02:43.146 SO libspdk_thread.so.10.1 00:02:43.146 SYMLINK libspdk_thread.so 00:02:43.146 CC lib/nvme/nvme_quirks.o 00:02:43.146 CC lib/nvme/nvme_transport.o 00:02:43.146 CC lib/nvme/nvme_discovery.o 00:02:43.146 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:43.146 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:43.146 CC lib/nvme/nvme_tcp.o 00:02:43.405 CC lib/nvme/nvme_opal.o 00:02:43.405 CC lib/nvme/nvme_io_msg.o 00:02:43.405 CC lib/nvme/nvme_poll_group.o 00:02:43.662 CC lib/nvme/nvme_zns.o 00:02:43.919 CC lib/nvme/nvme_stubs.o 00:02:43.919 CC lib/nvme/nvme_auth.o 00:02:43.919 CC lib/nvme/nvme_cuse.o 00:02:43.919 CC lib/nvme/nvme_rdma.o 00:02:43.919 CC lib/accel/accel.o 00:02:44.177 CC lib/accel/accel_rpc.o 00:02:44.177 CC lib/blob/blobstore.o 00:02:44.177 CC lib/blob/request.o 00:02:44.434 CC lib/blob/zeroes.o 00:02:44.434 CC lib/blob/blob_bs_dev.o 00:02:44.693 CC lib/accel/accel_sw.o 00:02:44.693 CC lib/init/json_config.o 00:02:44.693 CC lib/init/subsystem.o 00:02:44.693 CC lib/virtio/virtio.o 00:02:44.693 CC lib/virtio/virtio_vhost_user.o 00:02:44.950 CC lib/virtio/virtio_vfio_user.o 00:02:44.950 CC lib/init/subsystem_rpc.o 00:02:44.950 CC lib/virtio/virtio_pci.o 00:02:44.950 CC lib/init/rpc.o 00:02:45.212 LIB libspdk_accel.a 00:02:45.212 SO libspdk_accel.so.15.1 00:02:45.212 LIB libspdk_init.a 00:02:45.212 SO libspdk_init.so.5.0 00:02:45.212 SYMLINK libspdk_accel.so 00:02:45.212 LIB libspdk_virtio.a 00:02:45.212 SYMLINK libspdk_init.so 00:02:45.475 SO libspdk_virtio.so.7.0 00:02:45.475 LIB libspdk_nvme.a 00:02:45.475 SYMLINK libspdk_virtio.so 00:02:45.475 CC lib/bdev/bdev.o 00:02:45.475 CC lib/bdev/bdev_zone.o 00:02:45.475 CC lib/bdev/bdev_rpc.o 00:02:45.475 CC lib/bdev/part.o 00:02:45.475 CC lib/bdev/scsi_nvme.o 00:02:45.475 CC lib/event/app.o 00:02:45.475 CC lib/event/reactor.o 00:02:45.475 SO libspdk_nvme.so.13.1 00:02:45.475 CC lib/event/log_rpc.o 00:02:45.732 CC lib/event/app_rpc.o 00:02:45.732 CC lib/event/scheduler_static.o 00:02:45.989 SYMLINK libspdk_nvme.so 00:02:45.989 LIB libspdk_event.a 00:02:46.246 SO libspdk_event.so.14.0 00:02:46.246 SYMLINK libspdk_event.so 00:02:47.615 LIB libspdk_blob.a 00:02:47.615 SO libspdk_blob.so.11.0 00:02:47.615 SYMLINK libspdk_blob.so 00:02:47.872 CC lib/blobfs/blobfs.o 00:02:47.872 CC lib/blobfs/tree.o 00:02:47.872 CC lib/lvol/lvol.o 00:02:48.437 LIB libspdk_bdev.a 00:02:48.437 SO libspdk_bdev.so.15.1 00:02:48.437 SYMLINK libspdk_bdev.so 00:02:48.695 CC lib/nbd/nbd.o 00:02:48.695 CC lib/nbd/nbd_rpc.o 00:02:48.695 CC lib/nvmf/ctrlr.o 00:02:48.695 CC lib/nvmf/ctrlr_discovery.o 00:02:48.695 CC lib/nvmf/ctrlr_bdev.o 00:02:48.695 CC lib/ublk/ublk.o 
00:02:48.695 CC lib/scsi/dev.o 00:02:48.695 CC lib/ftl/ftl_core.o 00:02:48.952 LIB libspdk_blobfs.a 00:02:48.952 SO libspdk_blobfs.so.10.0 00:02:48.952 CC lib/ublk/ublk_rpc.o 00:02:48.952 SYMLINK libspdk_blobfs.so 00:02:48.952 CC lib/nvmf/subsystem.o 00:02:48.952 CC lib/scsi/lun.o 00:02:48.952 LIB libspdk_lvol.a 00:02:48.952 SO libspdk_lvol.so.10.0 00:02:49.230 CC lib/nvmf/nvmf.o 00:02:49.230 LIB libspdk_nbd.a 00:02:49.230 SYMLINK libspdk_lvol.so 00:02:49.230 CC lib/ftl/ftl_init.o 00:02:49.230 SO libspdk_nbd.so.7.0 00:02:49.230 CC lib/ftl/ftl_layout.o 00:02:49.230 SYMLINK libspdk_nbd.so 00:02:49.230 CC lib/ftl/ftl_debug.o 00:02:49.230 CC lib/ftl/ftl_io.o 00:02:49.230 CC lib/scsi/port.o 00:02:49.488 LIB libspdk_ublk.a 00:02:49.488 CC lib/scsi/scsi.o 00:02:49.488 SO libspdk_ublk.so.3.0 00:02:49.488 CC lib/ftl/ftl_sb.o 00:02:49.488 SYMLINK libspdk_ublk.so 00:02:49.488 CC lib/scsi/scsi_bdev.o 00:02:49.488 CC lib/scsi/scsi_pr.o 00:02:49.488 CC lib/ftl/ftl_l2p.o 00:02:49.488 CC lib/ftl/ftl_l2p_flat.o 00:02:49.488 CC lib/ftl/ftl_nv_cache.o 00:02:49.488 CC lib/nvmf/nvmf_rpc.o 00:02:49.744 CC lib/ftl/ftl_band.o 00:02:49.744 CC lib/ftl/ftl_band_ops.o 00:02:49.744 CC lib/scsi/scsi_rpc.o 00:02:49.744 CC lib/scsi/task.o 00:02:50.001 CC lib/nvmf/transport.o 00:02:50.001 CC lib/ftl/ftl_writer.o 00:02:50.001 CC lib/ftl/ftl_rq.o 00:02:50.001 LIB libspdk_scsi.a 00:02:50.001 CC lib/nvmf/tcp.o 00:02:50.001 SO libspdk_scsi.so.9.0 00:02:50.001 CC lib/ftl/ftl_reloc.o 00:02:50.258 CC lib/ftl/ftl_l2p_cache.o 00:02:50.258 SYMLINK libspdk_scsi.so 00:02:50.258 CC lib/ftl/ftl_p2l.o 00:02:50.258 CC lib/ftl/mngt/ftl_mngt.o 00:02:50.516 CC lib/nvmf/stubs.o 00:02:50.516 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:50.516 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:50.516 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:50.516 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:50.773 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:50.773 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:50.773 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:50.773 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:50.773 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:50.773 CC lib/nvmf/mdns_server.o 00:02:51.032 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:51.032 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:51.032 CC lib/iscsi/conn.o 00:02:51.032 CC lib/iscsi/init_grp.o 00:02:51.032 CC lib/iscsi/iscsi.o 00:02:51.032 CC lib/iscsi/md5.o 00:02:51.290 CC lib/nvmf/rdma.o 00:02:51.290 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:51.290 CC lib/iscsi/param.o 00:02:51.290 CC lib/vhost/vhost.o 00:02:51.290 CC lib/vhost/vhost_rpc.o 00:02:51.290 CC lib/vhost/vhost_scsi.o 00:02:51.548 CC lib/vhost/vhost_blk.o 00:02:51.548 CC lib/ftl/utils/ftl_conf.o 00:02:51.548 CC lib/ftl/utils/ftl_md.o 00:02:51.548 CC lib/ftl/utils/ftl_mempool.o 00:02:51.548 CC lib/ftl/utils/ftl_bitmap.o 00:02:51.806 CC lib/vhost/rte_vhost_user.o 00:02:51.806 CC lib/ftl/utils/ftl_property.o 00:02:51.806 CC lib/nvmf/auth.o 00:02:52.064 CC lib/iscsi/portal_grp.o 00:02:52.064 CC lib/iscsi/tgt_node.o 00:02:52.064 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:52.322 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:52.322 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:52.322 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:52.580 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:52.580 CC lib/iscsi/iscsi_subsystem.o 00:02:52.580 CC lib/iscsi/iscsi_rpc.o 00:02:52.580 CC lib/iscsi/task.o 00:02:52.580 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:52.580 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:52.580 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:52.580 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:52.580 CC 
lib/ftl/nvc/ftl_nvc_dev.o 00:02:52.838 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:52.838 CC lib/ftl/base/ftl_base_dev.o 00:02:52.838 CC lib/ftl/base/ftl_base_bdev.o 00:02:52.838 CC lib/ftl/ftl_trace.o 00:02:52.838 LIB libspdk_vhost.a 00:02:53.096 SO libspdk_vhost.so.8.0 00:02:53.096 LIB libspdk_iscsi.a 00:02:53.096 SO libspdk_iscsi.so.8.0 00:02:53.096 SYMLINK libspdk_vhost.so 00:02:53.096 LIB libspdk_ftl.a 00:02:53.353 LIB libspdk_nvmf.a 00:02:53.353 SYMLINK libspdk_iscsi.so 00:02:53.354 SO libspdk_ftl.so.9.0 00:02:53.354 SO libspdk_nvmf.so.19.0 00:02:53.611 SYMLINK libspdk_nvmf.so 00:02:53.869 SYMLINK libspdk_ftl.so 00:02:54.127 CC module/env_dpdk/env_dpdk_rpc.o 00:02:54.127 CC module/accel/dsa/accel_dsa.o 00:02:54.127 CC module/sock/posix/posix.o 00:02:54.127 CC module/accel/ioat/accel_ioat.o 00:02:54.127 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:54.127 CC module/accel/error/accel_error.o 00:02:54.127 CC module/keyring/file/keyring.o 00:02:54.127 CC module/scheduler/gscheduler/gscheduler.o 00:02:54.127 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:54.127 CC module/blob/bdev/blob_bdev.o 00:02:54.385 LIB libspdk_env_dpdk_rpc.a 00:02:54.385 SO libspdk_env_dpdk_rpc.so.6.0 00:02:54.385 SYMLINK libspdk_env_dpdk_rpc.so 00:02:54.385 CC module/accel/dsa/accel_dsa_rpc.o 00:02:54.385 LIB libspdk_scheduler_dpdk_governor.a 00:02:54.385 LIB libspdk_scheduler_gscheduler.a 00:02:54.385 CC module/accel/error/accel_error_rpc.o 00:02:54.385 CC module/keyring/file/keyring_rpc.o 00:02:54.385 CC module/accel/ioat/accel_ioat_rpc.o 00:02:54.385 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:54.385 LIB libspdk_scheduler_dynamic.a 00:02:54.385 SO libspdk_scheduler_gscheduler.so.4.0 00:02:54.385 SO libspdk_scheduler_dynamic.so.4.0 00:02:54.385 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:54.643 LIB libspdk_accel_dsa.a 00:02:54.643 SYMLINK libspdk_scheduler_gscheduler.so 00:02:54.643 SYMLINK libspdk_scheduler_dynamic.so 00:02:54.643 LIB libspdk_blob_bdev.a 00:02:54.643 SO libspdk_accel_dsa.so.5.0 00:02:54.643 LIB libspdk_accel_error.a 00:02:54.643 LIB libspdk_accel_ioat.a 00:02:54.643 SO libspdk_blob_bdev.so.11.0 00:02:54.643 LIB libspdk_keyring_file.a 00:02:54.643 SO libspdk_accel_error.so.2.0 00:02:54.643 SO libspdk_accel_ioat.so.6.0 00:02:54.643 SYMLINK libspdk_accel_dsa.so 00:02:54.643 SO libspdk_keyring_file.so.1.0 00:02:54.643 SYMLINK libspdk_blob_bdev.so 00:02:54.643 SYMLINK libspdk_accel_error.so 00:02:54.643 SYMLINK libspdk_accel_ioat.so 00:02:54.643 CC module/accel/iaa/accel_iaa.o 00:02:54.643 CC module/accel/iaa/accel_iaa_rpc.o 00:02:54.643 CC module/keyring/linux/keyring.o 00:02:54.643 SYMLINK libspdk_keyring_file.so 00:02:54.643 CC module/keyring/linux/keyring_rpc.o 00:02:54.901 LIB libspdk_keyring_linux.a 00:02:54.901 LIB libspdk_accel_iaa.a 00:02:54.901 SO libspdk_keyring_linux.so.1.0 00:02:54.901 SO libspdk_accel_iaa.so.3.0 00:02:55.159 CC module/blobfs/bdev/blobfs_bdev.o 00:02:55.159 CC module/bdev/error/vbdev_error.o 00:02:55.159 CC module/bdev/lvol/vbdev_lvol.o 00:02:55.159 CC module/bdev/delay/vbdev_delay.o 00:02:55.159 CC module/bdev/gpt/gpt.o 00:02:55.159 LIB libspdk_sock_posix.a 00:02:55.159 SYMLINK libspdk_keyring_linux.so 00:02:55.159 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:55.159 SYMLINK libspdk_accel_iaa.so 00:02:55.159 CC module/bdev/malloc/bdev_malloc.o 00:02:55.159 SO libspdk_sock_posix.so.6.0 00:02:55.159 CC module/bdev/null/bdev_null.o 00:02:55.159 SYMLINK libspdk_sock_posix.so 00:02:55.416 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:55.416 CC 
module/bdev/gpt/vbdev_gpt.o 00:02:55.416 CC module/bdev/error/vbdev_error_rpc.o 00:02:55.416 CC module/bdev/nvme/bdev_nvme.o 00:02:55.416 LIB libspdk_blobfs_bdev.a 00:02:55.416 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:55.416 CC module/bdev/null/bdev_null_rpc.o 00:02:55.416 CC module/bdev/passthru/vbdev_passthru.o 00:02:55.416 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:55.416 SO libspdk_blobfs_bdev.so.6.0 00:02:55.673 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:55.673 LIB libspdk_bdev_error.a 00:02:55.673 LIB libspdk_bdev_lvol.a 00:02:55.673 SYMLINK libspdk_blobfs_bdev.so 00:02:55.673 SO libspdk_bdev_error.so.6.0 00:02:55.673 SO libspdk_bdev_lvol.so.6.0 00:02:55.673 LIB libspdk_bdev_gpt.a 00:02:55.673 LIB libspdk_bdev_delay.a 00:02:55.673 LIB libspdk_bdev_null.a 00:02:55.673 SO libspdk_bdev_gpt.so.6.0 00:02:55.673 SYMLINK libspdk_bdev_error.so 00:02:55.673 LIB libspdk_bdev_malloc.a 00:02:55.673 SYMLINK libspdk_bdev_lvol.so 00:02:55.673 SO libspdk_bdev_delay.so.6.0 00:02:55.673 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:55.673 SO libspdk_bdev_null.so.6.0 00:02:55.673 SO libspdk_bdev_malloc.so.6.0 00:02:55.673 SYMLINK libspdk_bdev_gpt.so 00:02:55.931 CC module/bdev/raid/bdev_raid.o 00:02:55.931 SYMLINK libspdk_bdev_delay.so 00:02:55.931 SYMLINK libspdk_bdev_null.so 00:02:55.931 SYMLINK libspdk_bdev_malloc.so 00:02:55.931 CC module/bdev/raid/bdev_raid_rpc.o 00:02:55.931 LIB libspdk_bdev_passthru.a 00:02:55.931 SO libspdk_bdev_passthru.so.6.0 00:02:55.931 CC module/bdev/split/vbdev_split.o 00:02:55.931 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:55.931 SYMLINK libspdk_bdev_passthru.so 00:02:55.931 CC module/bdev/aio/bdev_aio.o 00:02:55.931 CC module/bdev/ftl/bdev_ftl.o 00:02:55.931 CC module/bdev/iscsi/bdev_iscsi.o 00:02:56.188 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:56.188 CC module/bdev/split/vbdev_split_rpc.o 00:02:56.188 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:56.188 CC module/bdev/nvme/nvme_rpc.o 00:02:56.445 LIB libspdk_bdev_split.a 00:02:56.445 LIB libspdk_bdev_ftl.a 00:02:56.445 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:56.445 SO libspdk_bdev_split.so.6.0 00:02:56.445 CC module/bdev/aio/bdev_aio_rpc.o 00:02:56.445 SO libspdk_bdev_ftl.so.6.0 00:02:56.445 CC module/bdev/nvme/bdev_mdns_client.o 00:02:56.445 SYMLINK libspdk_bdev_split.so 00:02:56.445 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:56.445 CC module/bdev/nvme/vbdev_opal.o 00:02:56.445 SYMLINK libspdk_bdev_ftl.so 00:02:56.445 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:56.445 LIB libspdk_bdev_aio.a 00:02:56.445 LIB libspdk_bdev_zone_block.a 00:02:56.703 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:56.703 SO libspdk_bdev_aio.so.6.0 00:02:56.703 SO libspdk_bdev_zone_block.so.6.0 00:02:56.703 LIB libspdk_bdev_iscsi.a 00:02:56.703 SYMLINK libspdk_bdev_zone_block.so 00:02:56.703 SO libspdk_bdev_iscsi.so.6.0 00:02:56.703 SYMLINK libspdk_bdev_aio.so 00:02:56.703 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:56.703 CC module/bdev/raid/bdev_raid_sb.o 00:02:56.703 CC module/bdev/raid/raid0.o 00:02:56.703 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:56.703 CC module/bdev/raid/raid1.o 00:02:56.703 SYMLINK libspdk_bdev_iscsi.so 00:02:56.703 CC module/bdev/raid/concat.o 00:02:56.960 LIB libspdk_bdev_virtio.a 00:02:56.960 LIB libspdk_bdev_raid.a 00:02:56.960 SO libspdk_bdev_virtio.so.6.0 00:02:57.218 SO libspdk_bdev_raid.so.6.0 00:02:57.218 SYMLINK libspdk_bdev_virtio.so 00:02:57.218 SYMLINK libspdk_bdev_raid.so 00:02:57.784 LIB libspdk_bdev_nvme.a 00:02:58.041 SO libspdk_bdev_nvme.so.7.0 00:02:58.041 
SYMLINK libspdk_bdev_nvme.so 00:02:58.606 CC module/event/subsystems/scheduler/scheduler.o 00:02:58.606 CC module/event/subsystems/vmd/vmd.o 00:02:58.606 CC module/event/subsystems/sock/sock.o 00:02:58.606 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:58.606 CC module/event/subsystems/keyring/keyring.o 00:02:58.606 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:58.606 CC module/event/subsystems/iobuf/iobuf.o 00:02:58.606 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:58.863 LIB libspdk_event_keyring.a 00:02:58.863 LIB libspdk_event_sock.a 00:02:58.863 LIB libspdk_event_vmd.a 00:02:58.863 LIB libspdk_event_scheduler.a 00:02:58.863 LIB libspdk_event_iobuf.a 00:02:58.863 LIB libspdk_event_vhost_blk.a 00:02:58.863 SO libspdk_event_sock.so.5.0 00:02:58.863 SO libspdk_event_scheduler.so.4.0 00:02:58.863 SO libspdk_event_keyring.so.1.0 00:02:58.863 SO libspdk_event_vmd.so.6.0 00:02:58.863 SO libspdk_event_vhost_blk.so.3.0 00:02:58.863 SO libspdk_event_iobuf.so.3.0 00:02:58.863 SYMLINK libspdk_event_scheduler.so 00:02:58.863 SYMLINK libspdk_event_sock.so 00:02:58.863 SYMLINK libspdk_event_keyring.so 00:02:58.863 SYMLINK libspdk_event_vmd.so 00:02:58.863 SYMLINK libspdk_event_vhost_blk.so 00:02:58.863 SYMLINK libspdk_event_iobuf.so 00:02:59.121 CC module/event/subsystems/accel/accel.o 00:02:59.379 LIB libspdk_event_accel.a 00:02:59.379 SO libspdk_event_accel.so.6.0 00:02:59.379 SYMLINK libspdk_event_accel.so 00:02:59.945 CC module/event/subsystems/bdev/bdev.o 00:02:59.945 LIB libspdk_event_bdev.a 00:02:59.945 SO libspdk_event_bdev.so.6.0 00:03:00.204 SYMLINK libspdk_event_bdev.so 00:03:00.462 CC module/event/subsystems/nbd/nbd.o 00:03:00.462 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:00.462 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:00.462 CC module/event/subsystems/ublk/ublk.o 00:03:00.462 CC module/event/subsystems/scsi/scsi.o 00:03:00.462 LIB libspdk_event_nbd.a 00:03:00.462 LIB libspdk_event_ublk.a 00:03:00.462 SO libspdk_event_nbd.so.6.0 00:03:00.462 LIB libspdk_event_scsi.a 00:03:00.462 SO libspdk_event_ublk.so.3.0 00:03:00.719 SYMLINK libspdk_event_nbd.so 00:03:00.719 SO libspdk_event_scsi.so.6.0 00:03:00.719 LIB libspdk_event_nvmf.a 00:03:00.719 SYMLINK libspdk_event_ublk.so 00:03:00.719 SO libspdk_event_nvmf.so.6.0 00:03:00.719 SYMLINK libspdk_event_scsi.so 00:03:00.719 SYMLINK libspdk_event_nvmf.so 00:03:00.978 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:00.978 CC module/event/subsystems/iscsi/iscsi.o 00:03:01.237 LIB libspdk_event_vhost_scsi.a 00:03:01.237 LIB libspdk_event_iscsi.a 00:03:01.237 SO libspdk_event_vhost_scsi.so.3.0 00:03:01.237 SO libspdk_event_iscsi.so.6.0 00:03:01.237 SYMLINK libspdk_event_vhost_scsi.so 00:03:01.237 SYMLINK libspdk_event_iscsi.so 00:03:01.495 SO libspdk.so.6.0 00:03:01.495 SYMLINK libspdk.so 00:03:01.764 CC app/trace_record/trace_record.o 00:03:01.764 CXX app/trace/trace.o 00:03:01.764 TEST_HEADER include/spdk/accel.h 00:03:01.764 TEST_HEADER include/spdk/accel_module.h 00:03:01.764 TEST_HEADER include/spdk/assert.h 00:03:01.764 TEST_HEADER include/spdk/barrier.h 00:03:01.764 TEST_HEADER include/spdk/base64.h 00:03:01.764 TEST_HEADER include/spdk/bdev.h 00:03:01.764 TEST_HEADER include/spdk/bdev_module.h 00:03:01.764 TEST_HEADER include/spdk/bdev_zone.h 00:03:01.764 TEST_HEADER include/spdk/bit_array.h 00:03:01.764 TEST_HEADER include/spdk/bit_pool.h 00:03:01.764 TEST_HEADER include/spdk/blob_bdev.h 00:03:01.764 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:01.764 TEST_HEADER include/spdk/blobfs.h 00:03:01.764 
TEST_HEADER include/spdk/blob.h 00:03:01.764 TEST_HEADER include/spdk/conf.h 00:03:01.764 TEST_HEADER include/spdk/config.h 00:03:01.764 TEST_HEADER include/spdk/cpuset.h 00:03:01.764 TEST_HEADER include/spdk/crc16.h 00:03:01.764 TEST_HEADER include/spdk/crc32.h 00:03:01.764 TEST_HEADER include/spdk/crc64.h 00:03:01.765 TEST_HEADER include/spdk/dif.h 00:03:01.765 TEST_HEADER include/spdk/dma.h 00:03:01.765 CC app/nvmf_tgt/nvmf_main.o 00:03:01.765 TEST_HEADER include/spdk/endian.h 00:03:01.765 TEST_HEADER include/spdk/env_dpdk.h 00:03:01.765 TEST_HEADER include/spdk/env.h 00:03:01.765 TEST_HEADER include/spdk/event.h 00:03:01.765 TEST_HEADER include/spdk/fd_group.h 00:03:01.765 TEST_HEADER include/spdk/fd.h 00:03:01.765 TEST_HEADER include/spdk/file.h 00:03:01.765 TEST_HEADER include/spdk/ftl.h 00:03:01.765 TEST_HEADER include/spdk/gpt_spec.h 00:03:01.765 TEST_HEADER include/spdk/hexlify.h 00:03:01.765 TEST_HEADER include/spdk/histogram_data.h 00:03:01.765 TEST_HEADER include/spdk/idxd.h 00:03:01.765 TEST_HEADER include/spdk/idxd_spec.h 00:03:01.765 TEST_HEADER include/spdk/init.h 00:03:01.765 CC examples/ioat/perf/perf.o 00:03:01.765 TEST_HEADER include/spdk/ioat.h 00:03:01.765 CC examples/util/zipf/zipf.o 00:03:01.765 TEST_HEADER include/spdk/ioat_spec.h 00:03:01.765 TEST_HEADER include/spdk/iscsi_spec.h 00:03:01.765 TEST_HEADER include/spdk/json.h 00:03:01.765 TEST_HEADER include/spdk/jsonrpc.h 00:03:01.765 CC test/thread/poller_perf/poller_perf.o 00:03:01.765 TEST_HEADER include/spdk/keyring.h 00:03:01.765 TEST_HEADER include/spdk/keyring_module.h 00:03:01.765 TEST_HEADER include/spdk/likely.h 00:03:01.765 TEST_HEADER include/spdk/log.h 00:03:01.765 TEST_HEADER include/spdk/lvol.h 00:03:01.765 TEST_HEADER include/spdk/memory.h 00:03:01.765 TEST_HEADER include/spdk/mmio.h 00:03:01.765 TEST_HEADER include/spdk/nbd.h 00:03:01.765 TEST_HEADER include/spdk/notify.h 00:03:01.765 TEST_HEADER include/spdk/nvme.h 00:03:01.765 TEST_HEADER include/spdk/nvme_intel.h 00:03:01.765 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:01.765 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:01.765 TEST_HEADER include/spdk/nvme_spec.h 00:03:01.765 CC test/app/bdev_svc/bdev_svc.o 00:03:01.765 TEST_HEADER include/spdk/nvme_zns.h 00:03:01.765 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:01.765 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:01.765 TEST_HEADER include/spdk/nvmf.h 00:03:01.765 TEST_HEADER include/spdk/nvmf_spec.h 00:03:01.765 CC test/dma/test_dma/test_dma.o 00:03:01.765 TEST_HEADER include/spdk/nvmf_transport.h 00:03:01.765 TEST_HEADER include/spdk/opal.h 00:03:01.765 TEST_HEADER include/spdk/opal_spec.h 00:03:01.765 TEST_HEADER include/spdk/pci_ids.h 00:03:01.765 TEST_HEADER include/spdk/pipe.h 00:03:01.765 TEST_HEADER include/spdk/queue.h 00:03:01.765 TEST_HEADER include/spdk/reduce.h 00:03:01.765 TEST_HEADER include/spdk/rpc.h 00:03:01.765 TEST_HEADER include/spdk/scheduler.h 00:03:01.765 TEST_HEADER include/spdk/scsi.h 00:03:01.765 TEST_HEADER include/spdk/scsi_spec.h 00:03:01.765 TEST_HEADER include/spdk/sock.h 00:03:01.765 TEST_HEADER include/spdk/stdinc.h 00:03:01.765 CC test/env/mem_callbacks/mem_callbacks.o 00:03:01.765 TEST_HEADER include/spdk/string.h 00:03:01.765 TEST_HEADER include/spdk/thread.h 00:03:01.765 TEST_HEADER include/spdk/trace.h 00:03:01.765 TEST_HEADER include/spdk/trace_parser.h 00:03:01.765 TEST_HEADER include/spdk/tree.h 00:03:01.765 TEST_HEADER include/spdk/ublk.h 00:03:01.765 TEST_HEADER include/spdk/util.h 00:03:01.765 TEST_HEADER include/spdk/uuid.h 00:03:01.765 TEST_HEADER 
include/spdk/version.h 00:03:01.765 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:02.023 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:02.023 TEST_HEADER include/spdk/vhost.h 00:03:02.023 TEST_HEADER include/spdk/vmd.h 00:03:02.023 TEST_HEADER include/spdk/xor.h 00:03:02.023 TEST_HEADER include/spdk/zipf.h 00:03:02.023 CXX test/cpp_headers/accel.o 00:03:02.023 LINK poller_perf 00:03:02.023 LINK nvmf_tgt 00:03:02.023 LINK spdk_trace_record 00:03:02.023 LINK zipf 00:03:02.023 LINK bdev_svc 00:03:02.023 LINK ioat_perf 00:03:02.023 CXX test/cpp_headers/accel_module.o 00:03:02.023 LINK spdk_trace 00:03:02.023 CXX test/cpp_headers/assert.o 00:03:02.023 CXX test/cpp_headers/barrier.o 00:03:02.281 LINK test_dma 00:03:02.281 CXX test/cpp_headers/base64.o 00:03:02.281 CC test/env/vtophys/vtophys.o 00:03:02.281 CC examples/ioat/verify/verify.o 00:03:02.281 CC test/app/histogram_perf/histogram_perf.o 00:03:02.281 CC test/app/jsoncat/jsoncat.o 00:03:02.281 CC test/app/stub/stub.o 00:03:02.281 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:02.540 CC app/iscsi_tgt/iscsi_tgt.o 00:03:02.541 LINK vtophys 00:03:02.541 CXX test/cpp_headers/bdev.o 00:03:02.541 CC test/rpc_client/rpc_client_test.o 00:03:02.541 LINK jsoncat 00:03:02.541 LINK mem_callbacks 00:03:02.541 LINK histogram_perf 00:03:02.541 LINK verify 00:03:02.541 LINK stub 00:03:02.541 LINK iscsi_tgt 00:03:02.541 CXX test/cpp_headers/bdev_module.o 00:03:02.805 LINK rpc_client_test 00:03:02.805 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:02.805 CC test/env/memory/memory_ut.o 00:03:02.805 LINK nvme_fuzz 00:03:02.805 CXX test/cpp_headers/bdev_zone.o 00:03:02.805 CC test/blobfs/mkfs/mkfs.o 00:03:02.805 LINK env_dpdk_post_init 00:03:02.805 CC test/accel/dif/dif.o 00:03:02.805 CC test/event/event_perf/event_perf.o 00:03:03.062 CC test/env/pci/pci_ut.o 00:03:03.062 CC app/spdk_tgt/spdk_tgt.o 00:03:03.062 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:03.062 LINK event_perf 00:03:03.062 CXX test/cpp_headers/bit_array.o 00:03:03.062 LINK mkfs 00:03:03.062 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:03.319 CXX test/cpp_headers/bit_pool.o 00:03:03.319 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:03.319 CC test/event/reactor/reactor.o 00:03:03.319 LINK spdk_tgt 00:03:03.319 LINK pci_ut 00:03:03.319 LINK dif 00:03:03.576 CXX test/cpp_headers/blob_bdev.o 00:03:03.576 LINK reactor 00:03:03.576 CC test/lvol/esnap/esnap.o 00:03:03.576 CXX test/cpp_headers/blobfs_bdev.o 00:03:03.576 CC app/spdk_lspci/spdk_lspci.o 00:03:03.834 CC app/spdk_nvme_perf/perf.o 00:03:03.834 LINK vhost_fuzz 00:03:03.834 CC test/event/reactor_perf/reactor_perf.o 00:03:03.834 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.834 LINK spdk_lspci 00:03:03.834 LINK memory_ut 00:03:03.834 CXX test/cpp_headers/blobfs.o 00:03:04.092 LINK reactor_perf 00:03:04.092 LINK interrupt_tgt 00:03:04.092 CC test/event/app_repeat/app_repeat.o 00:03:04.092 CXX test/cpp_headers/blob.o 00:03:04.349 CC test/event/scheduler/scheduler.o 00:03:04.349 LINK app_repeat 00:03:04.349 CXX test/cpp_headers/conf.o 00:03:04.349 CC app/spdk_nvme_identify/identify.o 00:03:04.349 CC test/nvme/aer/aer.o 00:03:04.608 CC test/bdev/bdevio/bdevio.o 00:03:04.608 LINK scheduler 00:03:04.608 CXX test/cpp_headers/config.o 00:03:04.608 CXX test/cpp_headers/cpuset.o 00:03:04.608 CC app/spdk_nvme_discover/discovery_aer.o 00:03:04.608 LINK spdk_nvme_perf 00:03:04.866 CXX test/cpp_headers/crc16.o 00:03:04.866 LINK aer 00:03:04.866 CXX test/cpp_headers/crc32.o 00:03:04.866 LINK iscsi_fuzz 00:03:04.866 LINK bdevio 
00:03:04.866 LINK spdk_nvme_discover 00:03:04.866 CC app/spdk_top/spdk_top.o 00:03:04.866 CXX test/cpp_headers/crc64.o 00:03:04.866 CXX test/cpp_headers/dif.o 00:03:05.123 CC test/nvme/reset/reset.o 00:03:05.123 CXX test/cpp_headers/dma.o 00:03:05.123 CXX test/cpp_headers/endian.o 00:03:05.123 LINK spdk_nvme_identify 00:03:05.123 CC test/nvme/sgl/sgl.o 00:03:05.382 CXX test/cpp_headers/env_dpdk.o 00:03:05.382 CC app/vhost/vhost.o 00:03:05.382 CC examples/thread/thread/thread_ex.o 00:03:05.382 LINK reset 00:03:05.639 LINK sgl 00:03:05.639 CXX test/cpp_headers/env.o 00:03:05.639 CC examples/sock/hello_world/hello_sock.o 00:03:05.898 LINK vhost 00:03:05.898 CC test/nvme/e2edp/nvme_dp.o 00:03:05.898 LINK thread 00:03:05.898 CXX test/cpp_headers/event.o 00:03:05.898 CC examples/vmd/lsvmd/lsvmd.o 00:03:05.898 LINK hello_sock 00:03:05.898 CC examples/vmd/led/led.o 00:03:05.898 LINK spdk_top 00:03:06.162 CXX test/cpp_headers/fd_group.o 00:03:06.162 LINK led 00:03:06.162 LINK lsvmd 00:03:06.162 CC app/spdk_dd/spdk_dd.o 00:03:06.162 CXX test/cpp_headers/fd.o 00:03:06.162 LINK nvme_dp 00:03:06.420 CC examples/idxd/perf/perf.o 00:03:06.420 CC app/fio/nvme/fio_plugin.o 00:03:06.420 CXX test/cpp_headers/file.o 00:03:06.420 CC app/fio/bdev/fio_plugin.o 00:03:06.678 CC test/nvme/overhead/overhead.o 00:03:06.678 CC examples/accel/perf/accel_perf.o 00:03:06.678 CXX test/cpp_headers/ftl.o 00:03:06.678 CC examples/blob/hello_world/hello_blob.o 00:03:06.678 LINK spdk_dd 00:03:06.678 LINK idxd_perf 00:03:06.936 CXX test/cpp_headers/gpt_spec.o 00:03:06.936 LINK overhead 00:03:07.194 LINK spdk_nvme 00:03:07.194 LINK spdk_bdev 00:03:07.194 CC test/nvme/err_injection/err_injection.o 00:03:07.194 LINK hello_blob 00:03:07.194 CXX test/cpp_headers/hexlify.o 00:03:07.194 CXX test/cpp_headers/histogram_data.o 00:03:07.194 CXX test/cpp_headers/idxd.o 00:03:07.194 LINK accel_perf 00:03:07.452 CXX test/cpp_headers/idxd_spec.o 00:03:07.452 CC examples/nvme/hello_world/hello_world.o 00:03:07.452 CXX test/cpp_headers/init.o 00:03:07.452 LINK err_injection 00:03:07.711 CC test/nvme/startup/startup.o 00:03:07.711 CXX test/cpp_headers/ioat.o 00:03:07.711 CC test/nvme/reserve/reserve.o 00:03:07.711 CC examples/blob/cli/blobcli.o 00:03:07.711 CC test/nvme/simple_copy/simple_copy.o 00:03:07.711 LINK hello_world 00:03:07.969 CC test/nvme/connect_stress/connect_stress.o 00:03:07.969 LINK startup 00:03:07.969 CXX test/cpp_headers/ioat_spec.o 00:03:07.969 CC examples/nvme/reconnect/reconnect.o 00:03:07.969 LINK reserve 00:03:07.969 LINK simple_copy 00:03:08.228 CC test/nvme/boot_partition/boot_partition.o 00:03:08.228 LINK connect_stress 00:03:08.228 CXX test/cpp_headers/iscsi_spec.o 00:03:08.228 CXX test/cpp_headers/json.o 00:03:08.228 CC test/nvme/compliance/nvme_compliance.o 00:03:08.228 LINK blobcli 00:03:08.228 CC test/nvme/fused_ordering/fused_ordering.o 00:03:08.487 LINK boot_partition 00:03:08.487 LINK reconnect 00:03:08.487 CXX test/cpp_headers/jsonrpc.o 00:03:08.487 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:08.745 CXX test/cpp_headers/keyring.o 00:03:08.745 LINK fused_ordering 00:03:08.745 LINK nvme_compliance 00:03:08.745 CC test/nvme/fdp/fdp.o 00:03:08.745 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:08.745 LINK doorbell_aers 00:03:08.745 CC examples/bdev/hello_world/hello_bdev.o 00:03:08.745 CC examples/bdev/bdevperf/bdevperf.o 00:03:08.745 CXX test/cpp_headers/keyring_module.o 00:03:09.004 CC test/nvme/cuse/cuse.o 00:03:09.004 CXX test/cpp_headers/likely.o 00:03:09.004 CC examples/nvme/arbitration/arbitration.o 
00:03:09.004 CXX test/cpp_headers/log.o 00:03:09.004 LINK hello_bdev 00:03:09.004 LINK fdp 00:03:09.004 CXX test/cpp_headers/lvol.o 00:03:09.262 CXX test/cpp_headers/memory.o 00:03:09.262 LINK nvme_manage 00:03:09.262 CXX test/cpp_headers/mmio.o 00:03:09.262 CXX test/cpp_headers/nbd.o 00:03:09.262 CXX test/cpp_headers/notify.o 00:03:09.262 CXX test/cpp_headers/nvme.o 00:03:09.262 LINK arbitration 00:03:09.262 CXX test/cpp_headers/nvme_intel.o 00:03:09.262 CXX test/cpp_headers/nvme_ocssd.o 00:03:09.262 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:09.521 CC examples/nvme/hotplug/hotplug.o 00:03:09.521 CXX test/cpp_headers/nvme_spec.o 00:03:09.521 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:09.521 LINK bdevperf 00:03:09.521 CC examples/nvme/abort/abort.o 00:03:09.521 CXX test/cpp_headers/nvme_zns.o 00:03:09.521 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:09.778 LINK esnap 00:03:09.778 CXX test/cpp_headers/nvmf_cmd.o 00:03:09.778 LINK cmb_copy 00:03:09.778 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:09.778 LINK pmr_persistence 00:03:09.778 CXX test/cpp_headers/nvmf.o 00:03:09.778 LINK hotplug 00:03:10.036 CXX test/cpp_headers/nvmf_spec.o 00:03:10.036 LINK abort 00:03:10.036 CXX test/cpp_headers/nvmf_transport.o 00:03:10.036 CXX test/cpp_headers/opal.o 00:03:10.036 CXX test/cpp_headers/opal_spec.o 00:03:10.036 CXX test/cpp_headers/pci_ids.o 00:03:10.036 CXX test/cpp_headers/pipe.o 00:03:10.294 CXX test/cpp_headers/queue.o 00:03:10.294 CXX test/cpp_headers/reduce.o 00:03:10.294 CXX test/cpp_headers/rpc.o 00:03:10.294 CXX test/cpp_headers/scheduler.o 00:03:10.294 CXX test/cpp_headers/scsi.o 00:03:10.294 LINK cuse 00:03:10.294 CXX test/cpp_headers/scsi_spec.o 00:03:10.294 CXX test/cpp_headers/sock.o 00:03:10.294 CXX test/cpp_headers/stdinc.o 00:03:10.294 CXX test/cpp_headers/string.o 00:03:10.294 CXX test/cpp_headers/thread.o 00:03:10.552 CXX test/cpp_headers/trace.o 00:03:10.552 CC examples/nvmf/nvmf/nvmf.o 00:03:10.552 CXX test/cpp_headers/trace_parser.o 00:03:10.552 CXX test/cpp_headers/tree.o 00:03:10.552 CXX test/cpp_headers/ublk.o 00:03:10.552 CXX test/cpp_headers/util.o 00:03:10.552 CXX test/cpp_headers/uuid.o 00:03:10.552 CXX test/cpp_headers/version.o 00:03:10.552 CXX test/cpp_headers/vfio_user_pci.o 00:03:10.552 CXX test/cpp_headers/vfio_user_spec.o 00:03:10.552 CXX test/cpp_headers/vhost.o 00:03:10.552 CXX test/cpp_headers/vmd.o 00:03:10.552 CXX test/cpp_headers/xor.o 00:03:10.552 CXX test/cpp_headers/zipf.o 00:03:10.810 LINK nvmf 00:03:11.068 00:03:11.068 real 1m8.413s 00:03:11.068 user 7m8.058s 00:03:11.068 sys 1m45.637s 00:03:11.068 15:49:04 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:11.068 15:49:04 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.068 ************************************ 00:03:11.068 END TEST make 00:03:11.068 ************************************ 00:03:11.068 15:49:04 -- common/autotest_common.sh@1142 -- $ return 0 00:03:11.068 15:49:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:11.068 15:49:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.068 15:49:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.068 15:49:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.068 15:49:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.068 15:49:04 -- pm/common@44 -- $ pid=5295 00:03:11.068 15:49:04 -- pm/common@50 -- $ kill -TERM 5295 00:03:11.068 15:49:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:03:11.068 15:49:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.068 15:49:04 -- pm/common@44 -- $ pid=5297 00:03:11.068 15:49:04 -- pm/common@50 -- $ kill -TERM 5297 00:03:11.068 15:49:04 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:11.068 15:49:04 -- nvmf/common.sh@7 -- # uname -s 00:03:11.068 15:49:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.068 15:49:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.068 15:49:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.068 15:49:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.068 15:49:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.068 15:49:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.068 15:49:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.068 15:49:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.068 15:49:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.068 15:49:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.068 15:49:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:03:11.068 15:49:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:03:11.068 15:49:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.068 15:49:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.068 15:49:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:11.068 15:49:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:11.068 15:49:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:11.068 15:49:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.068 15:49:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.068 15:49:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.068 15:49:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.068 15:49:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.068 15:49:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.068 15:49:04 -- paths/export.sh@5 -- # export PATH 00:03:11.068 15:49:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.068 15:49:04 -- nvmf/common.sh@47 -- # : 0 00:03:11.068 15:49:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:11.068 15:49:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 
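The xtrace above shows test/nvmf/common.sh establishing the defaults the nvmf test scripts rely on for this run: NVMF_PORT=4420, NVMF_TCP_IP_ADDRESS=127.0.0.1 (since NET_TYPE=virt), a generated NVME_HOSTNQN/NVME_HOSTID pair, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn, and NVME_CONNECT='nvme connect'. As an illustrative sketch only (the actual connect calls are made later by the test scripts and are not part of this excerpt), a host-side attach using those defaults would look roughly like:

# Illustrative sketch, not taken from this log: attach a Linux NVMe/TCP host
# to the SPDK test subsystem using the defaults printed above from
# test/nvmf/common.sh. Requires nvme-cli and a target listening on the port.
TRADDR=127.0.0.1                     # NVMF_TCP_IP_ADDRESS (NET_TYPE=virt)
TRSVCID=4420                         # NVMF_PORT
SUBNQN=nqn.2016-06.io.spdk:testnqn   # NVME_SUBNQN
HOSTNQN=$(nvme gen-hostnqn)          # NVME_HOSTNQN is generated the same way

nvme connect -t tcp -a "$TRADDR" -s "$TRSVCID" -n "$SUBNQN" --hostnqn="$HOSTNQN"

The test scripts reach the same result through the $NVME_CONNECT command and the ${NVME_HOST[@]} array rather than literal flags.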
00:03:11.068 15:49:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:11.068 15:49:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.068 15:49:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.068 15:49:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:11.069 15:49:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:11.069 15:49:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:11.069 15:49:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.069 15:49:04 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.069 15:49:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.069 15:49:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.069 15:49:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.069 15:49:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.069 15:49:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.069 15:49:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.069 15:49:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.069 15:49:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.069 15:49:04 -- spdk/autotest.sh@48 -- # udevadm_pid=54703 00:03:11.069 15:49:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.069 15:49:04 -- pm/common@17 -- # local monitor 00:03:11.069 15:49:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.069 15:49:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.069 15:49:04 -- pm/common@25 -- # sleep 1 00:03:11.069 15:49:04 -- pm/common@21 -- # date +%s 00:03:11.069 15:49:04 -- pm/common@21 -- # date +%s 00:03:11.327 15:49:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.327 15:49:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721058544 00:03:11.327 15:49:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721058544 00:03:11.327 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721058544_collect-vmstat.pm.log 00:03:11.327 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721058544_collect-cpu-load.pm.log 00:03:12.259 15:49:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.259 15:49:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.259 15:49:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:12.259 15:49:05 -- common/autotest_common.sh@10 -- # set +x 00:03:12.259 15:49:05 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.259 15:49:05 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:12.259 15:49:05 -- common/autotest_common.sh@10 -- # set +x 00:03:12.259 15:49:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:12.259 15:49:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:12.259 15:49:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:12.259 15:49:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:12.259 15:49:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:12.259 15:49:05 -- spdk/autotest.sh@65 -- # 
freebsd_update_contigmem_mod 00:03:12.259 15:49:05 -- common/autotest_common.sh@1455 -- # uname 00:03:12.259 15:49:05 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:12.259 15:49:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.259 15:49:05 -- common/autotest_common.sh@1475 -- # uname 00:03:12.259 15:49:05 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:12.259 15:49:05 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:12.259 15:49:05 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:12.259 15:49:05 -- spdk/autotest.sh@72 -- # hash lcov 00:03:12.259 15:49:05 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:12.259 15:49:05 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:12.259 --rc lcov_branch_coverage=1 00:03:12.259 --rc lcov_function_coverage=1 00:03:12.259 --rc genhtml_branch_coverage=1 00:03:12.259 --rc genhtml_function_coverage=1 00:03:12.259 --rc genhtml_legend=1 00:03:12.259 --rc geninfo_all_blocks=1 00:03:12.259 ' 00:03:12.259 15:49:05 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:12.259 --rc lcov_branch_coverage=1 00:03:12.259 --rc lcov_function_coverage=1 00:03:12.259 --rc genhtml_branch_coverage=1 00:03:12.259 --rc genhtml_function_coverage=1 00:03:12.259 --rc genhtml_legend=1 00:03:12.259 --rc geninfo_all_blocks=1 00:03:12.259 ' 00:03:12.259 15:49:05 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:12.259 --rc lcov_branch_coverage=1 00:03:12.259 --rc lcov_function_coverage=1 00:03:12.259 --rc genhtml_branch_coverage=1 00:03:12.259 --rc genhtml_function_coverage=1 00:03:12.259 --rc genhtml_legend=1 00:03:12.259 --rc geninfo_all_blocks=1 00:03:12.259 --no-external' 00:03:12.259 15:49:05 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:12.259 --rc lcov_branch_coverage=1 00:03:12.259 --rc lcov_function_coverage=1 00:03:12.259 --rc genhtml_branch_coverage=1 00:03:12.259 --rc genhtml_function_coverage=1 00:03:12.259 --rc genhtml_legend=1 00:03:12.259 --rc geninfo_all_blocks=1 00:03:12.259 --no-external' 00:03:12.259 15:49:05 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:12.259 lcov: LCOV version 1.14 00:03:12.259 15:49:05 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:30.333 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:30.333 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:42.597 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no 
functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:42.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:42.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:42.598 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:42.598 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:42.598 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:46.781 15:49:39 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:46.781 15:49:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:46.781 15:49:39 -- common/autotest_common.sh@10 -- # set +x 00:03:46.781 15:49:39 -- spdk/autotest.sh@91 -- # rm -f 00:03:46.781 15:49:39 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.039 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:47.039 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:47.039 15:49:40 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:47.039 15:49:40 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:47.039 15:49:40 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:47.039 15:49:40 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:47.039 15:49:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.039 15:49:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:47.039 15:49:40 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:47.039 15:49:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.039 15:49:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.039 15:49:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.039 15:49:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:47.039 15:49:40 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:47.039 15:49:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:47.039 15:49:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.039 15:49:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.039 15:49:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:47.039 15:49:40 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:47.039 15:49:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:47.039 15:49:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.039 15:49:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.039 15:49:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:47.039 15:49:40 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:47.039 15:49:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:47.039 15:49:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.039 15:49:40 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:47.040 15:49:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.040 15:49:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:47.040 15:49:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:47.040 15:49:40 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:47.040 
15:49:40 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:47.040 No valid GPT data, bailing 00:03:47.040 15:49:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.040 15:49:40 -- scripts/common.sh@391 -- # pt= 00:03:47.040 15:49:40 -- scripts/common.sh@392 -- # return 1 00:03:47.040 15:49:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:47.040 1+0 records in 00:03:47.040 1+0 records out 00:03:47.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526349 s, 199 MB/s 00:03:47.040 15:49:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.040 15:49:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:47.040 15:49:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:47.040 15:49:40 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:47.040 15:49:40 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:47.040 No valid GPT data, bailing 00:03:47.040 15:49:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:47.040 15:49:40 -- scripts/common.sh@391 -- # pt= 00:03:47.040 15:49:40 -- scripts/common.sh@392 -- # return 1 00:03:47.298 15:49:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:47.298 1+0 records in 00:03:47.298 1+0 records out 00:03:47.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00396613 s, 264 MB/s 00:03:47.298 15:49:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.298 15:49:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:47.298 15:49:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:47.298 15:49:40 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:47.298 15:49:40 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:47.298 No valid GPT data, bailing 00:03:47.298 15:49:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:47.298 15:49:40 -- scripts/common.sh@391 -- # pt= 00:03:47.298 15:49:40 -- scripts/common.sh@392 -- # return 1 00:03:47.298 15:49:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:47.298 1+0 records in 00:03:47.298 1+0 records out 00:03:47.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041025 s, 256 MB/s 00:03:47.298 15:49:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.298 15:49:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:47.298 15:49:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:47.298 15:49:40 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:47.298 15:49:40 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:47.298 No valid GPT data, bailing 00:03:47.298 15:49:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:47.298 15:49:40 -- scripts/common.sh@391 -- # pt= 00:03:47.298 15:49:40 -- scripts/common.sh@392 -- # return 1 00:03:47.298 15:49:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:47.298 1+0 records in 00:03:47.298 1+0 records out 00:03:47.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436415 s, 240 MB/s 00:03:47.298 15:49:40 -- spdk/autotest.sh@118 -- # sync 00:03:47.298 15:49:40 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:47.298 15:49:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:47.298 15:49:40 -- common/autotest_common.sh@22 
-- # reap_spdk_processes 00:03:49.199 15:49:42 -- spdk/autotest.sh@124 -- # uname -s 00:03:49.199 15:49:42 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:49.199 15:49:42 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:49.199 15:49:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.199 15:49:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.199 15:49:42 -- common/autotest_common.sh@10 -- # set +x 00:03:49.199 ************************************ 00:03:49.199 START TEST setup.sh 00:03:49.199 ************************************ 00:03:49.199 15:49:42 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:49.199 * Looking for test storage... 00:03:49.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:49.199 15:49:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:49.199 15:49:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:49.199 15:49:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:49.199 15:49:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.199 15:49:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.199 15:49:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:49.199 ************************************ 00:03:49.199 START TEST acl 00:03:49.199 ************************************ 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:49.199 * Looking for test storage... 00:03:49.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:49.199 15:49:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:49.199 15:49:42 
setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:49.199 15:49:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.199 15:49:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:49.199 15:49:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:49.199 15:49:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:49.199 15:49:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:49.200 15:49:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:49.200 15:49:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.200 15:49:42 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.780 15:49:43 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:49.780 15:49:43 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:49.780 15:49:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.780 15:49:43 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:49.780 15:49:43 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.780 15:49:43 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:50.369 15:49:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:50.369 15:49:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:50.369 15:49:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.369 Hugepages 00:03:50.369 node hugesize free / total 00:03:50.369 15:49:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:50.369 15:49:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:50.369 15:49:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.369 00:03:50.369 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:50.369 15:49:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:50.369 15:49:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:50.369 15:49:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:50.627 15:49:44 setup.sh.acl 
-- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:50.627 15:49:44 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:50.627 15:49:44 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.627 15:49:44 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.627 15:49:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:50.627 ************************************ 00:03:50.627 START TEST denied 00:03:50.627 ************************************ 00:03:50.627 15:49:44 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:50.627 15:49:44 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:50.627 15:49:44 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:50.627 15:49:44 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:50.627 15:49:44 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.627 15:49:44 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.558 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:51.558 15:49:45 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:51.558 15:49:45 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:51.558 15:49:45 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:51.558 15:49:45 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:51.558 15:49:45 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:51.558 15:49:45 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:51.558 15:49:45 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:51.558 15:49:45 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:51.558 15:49:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.558 15:49:45 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:52.120 00:03:52.121 real 0m1.375s 00:03:52.121 user 0m0.509s 00:03:52.121 sys 0m0.801s 00:03:52.121 15:49:45 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.121 ************************************ 00:03:52.121 END TEST denied 00:03:52.121 ************************************ 00:03:52.121 15:49:45 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:52.121 15:49:45 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:52.121 15:49:45 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:52.121 15:49:45 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.121 15:49:45 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.121 15:49:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:52.121 ************************************ 00:03:52.121 START TEST allowed 00:03:52.121 ************************************ 00:03:52.121 15:49:45 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:52.121 15:49:45 
setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:52.121 15:49:45 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:52.121 15:49:45 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.121 15:49:45 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:52.121 15:49:45 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:53.051 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.051 15:49:46 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:53.051 15:49:46 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:53.051 15:49:46 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:53.051 15:49:46 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:53.051 15:49:46 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:53.051 15:49:46 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:53.051 15:49:46 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:53.051 15:49:46 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:53.051 15:49:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.051 15:49:46 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.616 00:03:53.616 real 0m1.545s 00:03:53.616 user 0m0.669s 00:03:53.616 sys 0m0.854s 00:03:53.616 15:49:47 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.616 15:49:47 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:53.616 ************************************ 00:03:53.616 END TEST allowed 00:03:53.616 ************************************ 00:03:53.616 15:49:47 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:53.616 ************************************ 00:03:53.616 END TEST acl 00:03:53.616 ************************************ 00:03:53.616 00:03:53.616 real 0m4.661s 00:03:53.616 user 0m2.031s 00:03:53.616 sys 0m2.551s 00:03:53.616 15:49:47 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.616 15:49:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:53.875 15:49:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:53.875 15:49:47 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:53.875 15:49:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.875 15:49:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.875 15:49:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:53.875 ************************************ 00:03:53.875 START TEST hugepages 00:03:53.875 ************************************ 00:03:53.875 15:49:47 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:53.875 * Looking for test storage... 
00:03:53.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5866332 kB' 'MemAvailable: 7376732 kB' 'Buffers: 2436 kB' 'Cached: 1721788 kB' 'SwapCached: 0 kB' 'Active: 478472 kB' 'Inactive: 1351536 kB' 'Active(anon): 116272 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 107368 kB' 'Mapped: 48764 kB' 'Shmem: 10488 kB' 'KReclaimable: 67204 kB' 'Slab: 140556 kB' 'SReclaimable: 67204 kB' 'SUnreclaim: 73352 kB' 'KernelStack: 6348 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 347420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.875 15:49:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.875 15:49:47 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:53.875-00:03:53.877 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [remaining /proc/meminfo keys Unevictable through HugePages_Surp each compared against Hugepagesize; no match, loop continues]
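The trace above is setup/common.sh walking /proc/meminfo with IFS=': ' and read -r var val _ until it reaches the Hugepagesize line; the match and the echoed value follow just below. A minimal standalone sketch of that pattern, assuming a hypothetical helper name get_meminfo_field (not one of the SPDK functions):

get_meminfo_field() {
    # Split each "Key:   value kB" line on ':' and blanks, exactly like the
    # traced loop, and stop at the first key that matches.
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# e.g. get_meminfo_field Hugepagesize -> 2048 (kB) on the test VM in this run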
00:03:53.877 15:49:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:53.877 15:49:47 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:53.877 15:49:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.877 15:49:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.877 15:49:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:53.877 ************************************ 00:03:53.877 START TEST default_setup 00:03:53.877 ************************************ 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.877 15:49:47 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.707 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.707 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.707 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:54.707 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:54.707 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.707 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.707 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.708 15:49:48 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7969660 kB' 'MemAvailable: 9479872 kB' 'Buffers: 2436 kB' 'Cached: 1721776 kB' 'SwapCached: 0 kB' 'Active: 494980 kB' 'Inactive: 1351548 kB' 'Active(anon): 132780 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123940 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140092 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73284 kB' 'KernelStack: 6336 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.708 15:49:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _
00:03:54.708-00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [/proc/meminfo keys Cached through Percpu each compared against AnonHugePages; no match, loop continues]
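The same get_meminfo machinery runs again below for HugePages_Surp and HugePages_Rsvd. Two details are visible in the trace: the path /sys/devices/system/node/node/meminfo is checked with an empty node value (no node argument was passed), so the global /proc/meminfo is used, and per-node meminfo lines would have their "Node N " prefix stripped with an extglob pattern. A rough sketch of that selection logic, with the hypothetical name read_meminfo_value standing in for the real helper:

shopt -s extglob
read_meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo mem var val _ line
    # Prefer the per-node file only when a node was actually given and exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}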
00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.709 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7969160 kB' 'MemAvailable: 9479372 kB' 'Buffers: 2436 kB' 'Cached: 1721776 kB' 'SwapCached: 0 kB' 'Active: 495068 kB' 'Inactive: 1351548 kB' 'Active(anon): 132868 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 124044 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140092 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73284 kB' 'KernelStack: 6352 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 
'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
00:03:54.709-00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [/proc/meminfo keys MemTotal through HugePages_Rsvd each compared against HugePages_Surp; no match, loop continues]
00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.973
15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7969160 kB' 'MemAvailable: 9479372 kB' 'Buffers: 2436 kB' 'Cached: 1721776 kB' 'SwapCached: 0 kB' 'Active: 494888 kB' 'Inactive: 1351548 kB' 'Active(anon): 132688 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123824 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140092 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73284 kB' 'KernelStack: 6336 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.973 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.973 
15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.974 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:54.975 nr_hugepages=1024 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.975 resv_hugepages=0 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.975 surplus_hugepages=0 00:03:54.975 anon_hugepages=0 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.975 15:49:48 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7969160 kB' 'MemAvailable: 9479372 kB' 'Buffers: 2436 kB' 'Cached: 1721776 kB' 'SwapCached: 0 kB' 'Active: 494872 kB' 'Inactive: 1351548 kB' 'Active(anon): 132672 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123808 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140092 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73284 kB' 'KernelStack: 6336 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.975 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.976 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.977 15:49:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7969160 kB' 'MemUsed: 4272816 kB' 'SwapCached: 0 kB' 'Active: 494864 kB' 'Inactive: 1351548 kB' 'Active(anon): 132664 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1724212 kB' 'Mapped: 48768 kB' 'AnonPages: 123828 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66808 kB' 'Slab: 140084 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.977 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.978 node0=1024 expecting 1024 00:03:54.978 ************************************ 00:03:54.978 END TEST default_setup 00:03:54.978 ************************************ 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:54.978 00:03:54.978 real 0m1.033s 00:03:54.978 user 0m0.462s 00:03:54.978 sys 0m0.468s 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.978 15:49:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:54.978 15:49:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:54.978 15:49:48 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:54.978 15:49:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
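The default_setup case above closes by echoing 'node0=1024 expecting 1024' and asserting [[ 1024 == 1024 ]]. A minimal sketch of that final comparison, assuming a single-node VM where the system-wide HugePages_Total from /proc/meminfo stands in for the node-0 count that setup/hugepages.sh tracks in its nodes_test array (expect_hugepages is a hypothetical name, not a helper from the traced scripts):

#!/usr/bin/env bash
# Sketch of the closing check above ("node0=1024 expecting 1024"):
# compare the reserved hugepage count against the expected value.
expect_hugepages() {
    local expected=$1 actual
    # HugePages_Total is the system-wide count reported by /proc/meminfo.
    actual=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    echo "node0=${actual} expecting ${expected}"
    [[ $actual == "$expected" ]]
}

expect_hugepages 1024   # exits 0 once 1024 default-size hugepages are reserved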
00:03:54.978 15:49:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.978 15:49:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.978 ************************************ 00:03:54.978 START TEST per_node_1G_alloc 00:03:54.978 ************************************ 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.978 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.237 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.237 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:55.237 
15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.237 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.501 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.501 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9016336 kB' 'MemAvailable: 10526548 kB' 'Buffers: 2436 kB' 'Cached: 1721776 kB' 'SwapCached: 0 kB' 'Active: 495472 kB' 'Inactive: 1351548 kB' 'Active(anon): 133272 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 124396 kB' 'Mapped: 48920 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140144 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73336 kB' 'KernelStack: 6340 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.502 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
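The per_node_1G_alloc case above first converts the requested 1048576 kB into default-size pages (1048576 kB / 2048 kB per page = 512 pages), pins them to node 0 via NRHUGE=512 HUGENODE=0, and then re-verifies the layout with the same meminfo lookups being traced here. A rough sketch of that lookup loop, assuming the 'Key: value' format of /proc/meminfo and the 'Node <n> Key: value' format of the per-node files (meminfo_get is a hypothetical stand-in for the traced get_meminfo() in setup/common.sh):

#!/usr/bin/env bash
# Sketch of the meminfo lookup the xtrace above keeps stepping through:
# split each "Key: value" line on ': ', skip until the requested key matches,
# then print the value.
meminfo_get() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    # With a node argument, prefer the per-node file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; strip that first.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue      # e.g. HugePages_Surp, AnonHugePages
        echo "${val:-0}"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
}

meminfo_get HugePages_Surp      # prints 0 on the VM traced above
meminfo_get AnonHugePages 0     # per-node variant, when node0/meminfo is present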
00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9016084 kB' 'MemAvailable: 10526296 kB' 'Buffers: 2436 kB' 'Cached: 1721776 kB' 'SwapCached: 0 kB' 'Active: 494800 kB' 'Inactive: 1351548 kB' 'Active(anon): 132600 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123972 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140152 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73344 kB' 'KernelStack: 6304 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.503 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.504 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9016084 kB' 'MemAvailable: 10526296 kB' 'Buffers: 2436 kB' 'Cached: 1721776 kB' 'SwapCached: 0 kB' 'Active: 494752 kB' 'Inactive: 1351548 kB' 'Active(anon): 132552 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123924 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140184 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73376 kB' 'KernelStack: 6336 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.505 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.506 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.507 nr_hugepages=512 00:03:55.507 resv_hugepages=0 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.507 surplus_hugepages=0 00:03:55.507 anon_hugepages=0 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
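The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until it reaches HugePages_Rsvd, which is why every non-matching field produces its own continue entry before the helper finally echoes 0 and the hugepages script records nr_hugepages=512 / resv_hugepages=0. A minimal standalone sketch of that lookup pattern (hypothetical helper name; simplified from what the trace shows, not the exact SPDK source):

    # Sketch only: prints the value of one /proc/meminfo field, as the trace above does.
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # every non-matching field is skipped, which is what the long run of
            # "continue" entries in the trace corresponds to
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # e.g. lookup_meminfo HugePages_Rsvd   -> prints 0 on the system traced above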
00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9016084 kB' 'MemAvailable: 10526296 kB' 'Buffers: 2436 kB' 'Cached: 1721776 kB' 'SwapCached: 0 kB' 'Active: 494704 kB' 'Inactive: 1351548 kB' 'Active(anon): 132504 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123880 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140184 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73376 kB' 'KernelStack: 6336 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.507 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 
15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.508 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 
)) 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9016084 kB' 'MemUsed: 3225892 kB' 'SwapCached: 0 kB' 'Active: 494952 kB' 'Inactive: 1351548 kB' 'Active(anon): 132752 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1724212 kB' 'Mapped: 48768 kB' 'AnonPages: 123944 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66808 kB' 'Slab: 140164 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.509 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
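From here the same lookup is repeated against node 0: mem_f switches to /sys/devices/system/node/node0/meminfo and the "Node 0 " prefix is stripped from every line before the field comparison, so HugePages_Surp can be read per node. A rough sketch of that per-node path (hypothetical helper name, assuming bash with extglob; simplified from the trace, not the exact SPDK helper):

    shopt -s extglob   # needed for the +([0-9]) pattern used below
    # Sketch only: prints one field from a node's meminfo, mirroring the trace above.
    lookup_node_meminfo() {
        local node=$1 get=$2 mem line var val _
        # per-node counters live under sysfs rather than /proc
        mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
        # each line starts with "Node <n> "; drop that prefix, as the trace does
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # e.g. lookup_node_meminfo 0 HugePages_Surp   -> prints 0 in the run above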
00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 
15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.510 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.511 node0=512 expecting 512 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:55.511 ************************************ 00:03:55.511 END TEST per_node_1G_alloc 00:03:55.511 ************************************ 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:55.511 00:03:55.511 real 0m0.562s 00:03:55.511 user 0m0.277s 00:03:55.511 sys 0m0.284s 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.511 15:49:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.511 15:49:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:55.511 15:49:49 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:55.511 15:49:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.511 15:49:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.511 15:49:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.511 ************************************ 00:03:55.511 START TEST even_2G_alloc 00:03:55.511 ************************************ 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:55.511 15:49:49 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.511 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:56.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.087 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.087 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7962748 kB' 'MemAvailable: 9472964 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 495332 kB' 'Inactive: 1351552 kB' 'Active(anon): 133132 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 124336 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140180 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73372 kB' 'KernelStack: 6400 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
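Just above, even_2G_alloc requested 2097152 kB of hugepages (hugepages.sh@152) and arrived at nr_hugepages=1024 on the single memory node of this VM, then exported NRHUGE=1024 and HUGE_EVEN_ALLOC=yes before invoking scripts/setup.sh. Only the input size and the resulting page count are visible in the trace; the division below is the assumed derivation, using the Hugepagesize of 2048 kB reported in the meminfo dumps:

size_kb=2097152                  # argument to get_test_nr_hugepages (2 GiB, hence "even_2G_alloc")
default_hugepage_kb=2048         # Hugepagesize from the meminfo dumps in this log
nr_hugepages=$(( size_kb / default_hugepage_kb ))    # 1024, matching nr_hugepages=1024 in the trace

_no_nodes=1                                  # hugepages.sh@65: one memory node on this VM
nodes_test[_no_nodes - 1]=$nr_hugepages      # hugepages.sh@82: all 1024 pages assigned to node 0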
00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.087 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7962748 kB' 'MemAvailable: 9472964 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 494776 kB' 'Inactive: 1351552 kB' 'Active(anon): 132576 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123956 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140180 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73372 kB' 'KernelStack: 6352 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.088 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 
15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.089 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 
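From hugepages.sh@97 onward, verify_nr_hugepages reads three counters out of the same meminfo snapshot: anon (AnonHugePages) and surp (HugePages_Surp) have already come back as 0, and the get_meminfo HugePages_Rsvd call whose scan follows will return 0 as well, per the HugePages_Rsvd line in the dump. As an illustrative use of the get_meminfo sketch earlier (the variable names anon and surp appear in the trace; resv is an assumption for the HugePages_Rsvd lookup):

anon=$(get_meminfo AnonHugePages)     # 0 (kB): no anonymous transparent hugepages in use
surp=$(get_meminfo HugePages_Surp)    # 0 surplus hugepages
resv=$(get_meminfo HugePages_Rsvd)    # 0 reserved hugepages
echo "anon=$anon surp=$surp resv=$resv"   # checked against the HugePages_Total/HugePages_Free of 1024 above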
00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7962748 kB' 'MemAvailable: 9472960 kB' 'Buffers: 2436 kB' 'Cached: 1721776 kB' 'SwapCached: 0 kB' 'Active: 495252 kB' 'Inactive: 1351548 kB' 'Active(anon): 133052 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123976 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140164 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73356 kB' 'KernelStack: 6352 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.090 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.091 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.092 nr_hugepages=1024 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.092 resv_hugepages=0 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.092 surplus_hugepages=0 00:03:56.092 anon_hugepages=0 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 
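The scan that just finished is the pattern the setup/common.sh get_meminfo helper follows throughout this log: split each meminfo line on ': ', skip (continue) every key that is not the requested one, and echo the matching value, here 0 for HugePages_Rsvd. A minimal sketch of that loop, written as a simplified stand-in function rather than the script's verbatim code:

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node index, read the per-node counters from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}            # per-node files prefix each line with "Node <n>"
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue      # same skip-until-match behaviour as the trace above
        echo "$val"
        return 0
    done <"$mem_f"
    return 1
}

# Example use consistent with the values in this log:
#   resv=$(get_meminfo_sketch HugePages_Rsvd)    # -> 0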
-- # local mem_f mem 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7962748 kB' 'MemAvailable: 9472964 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 494936 kB' 'Inactive: 1351552 kB' 'Active(anon): 132736 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123764 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140172 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73364 kB' 'KernelStack: 6368 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.092 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.093 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7962748 kB' 'MemUsed: 4279228 kB' 'SwapCached: 0 kB' 'Active: 494972 kB' 'Inactive: 1351552 kB' 'Active(anon): 132772 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 
'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1724216 kB' 'Mapped: 48764 kB' 'AnonPages: 123988 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66808 kB' 'Slab: 140148 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
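At this point the test has mapped all 1024 pages onto the machine's NUMA nodes (a single node here) and is re-reading each node's counters from /sys/devices/system/node/node0/meminfo to confirm the expected split; the "node0=1024 expecting 1024" line further down is the result of that comparison. A compact sketch of the same check, assuming the helper sketched earlier and a hypothetical wrapper name:

verify_nodes_sketch() {
    local expected=$1 path node total surp
    for path in /sys/devices/system/node/node[0-9]*; do
        [[ -d $path ]] || continue
        node=${path##*node}                               # "node0" -> "0"
        total=$(get_meminfo_sketch HugePages_Total "$node")
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        echo "node$node=$((total - surp)) expecting $expected"
        (( total - surp == expected )) || return 1
    done
}

# e.g. verify_nodes_sketch 1024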
00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.094 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.095 node0=1024 expecting 1024 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:56.095 00:03:56.095 real 0m0.548s 00:03:56.095 user 0m0.273s 00:03:56.095 sys 0m0.289s 00:03:56.095 ************************************ 00:03:56.095 END TEST even_2G_alloc 00:03:56.095 ************************************ 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.095 15:49:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:56.095 15:49:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:56.095 15:49:49 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:56.095 15:49:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.095 15:49:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.095 15:49:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.095 ************************************ 00:03:56.095 START TEST odd_alloc 00:03:56.095 ************************************ 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # 
odd_alloc 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:56.095 15:49:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:56.096 15:49:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.096 15:49:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:56.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.665 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.665 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- 
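The odd_alloc test parameterized just above asks for 2098176 kB of hugepage memory, which at the 2048 kB default page size works out to the deliberately odd count of 1025 pages (HUGEMEM=2049 MB); the "Hugetlb: 2099200 kB" figure in the meminfo dump that follows is 1025 * 2048 kB. The arithmetic, as a sketch rather than the exact get_test_nr_hugepages logic (the round-up is an assumption):

HUGEMEM_MB=2049                       # HUGEMEM exported for the odd_alloc test
hugepage_kb=2048                      # Hugepagesize reported in the meminfo dumps
size_kb=$(( HUGEMEM_MB * 1024 ))      # 2098176 kB, the size passed to get_test_nr_hugepages
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "nr_hugepages=$nr_hugepages"     # 1025: 1024.5 pages rounded up to an odd count
echo "hugetlb_kb=$(( nr_hugepages * hugepage_kb ))"   # 2099200 kB, matching the dump below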
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.665 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7965736 kB' 'MemAvailable: 9475952 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 495340 kB' 'Inactive: 1351552 kB' 'Active(anon): 133140 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 124244 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140116 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73308 kB' 'KernelStack: 6292 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.666 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
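[editor's note] For context on the numbers in the odd_alloc prologue traced above (get_test_nr_hugepages 2098176, nr_hugepages=1025, HUGEMEM=2049): 2049 MB is exactly 2098176 kB, and with the 2048 kB hugepage size reported by /proc/meminfo that request rounds up to 1025 pages, i.e. the deliberately odd count this test allocates. A minimal sketch of that arithmetic, with hypothetical variable names (not the actual setup/hugepages.sh body):

    # illustrative only: reproduces the sizing visible in the trace above
    default_hugepages=2048                  # kB, from 'Hugepagesize: 2048 kB'
    size_kb=2098176                         # requested: HUGEMEM=2049 MB * 1024
    nr_hugepages=$(((size_kb + default_hugepages - 1) / default_hugepages))
    echo "$nr_hugepages"                    # prints 1025, an odd page count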
00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
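[editor's note] The bulk of this trace is setup/common.sh's get_meminfo walking every /proc/meminfo key and skipping it with 'continue' until it reaches the key it was asked for (AnonHugePages here, HugePages_Surp and HugePages_Rsvd further down); each non-matching key produces one '[[ ... ]]' plus one 'continue' xtrace line. A minimal stand-alone sketch of that pattern, reading /proc/meminfo directly instead of through the script's mapfile buffer, with a hypothetical function name:

    # hypothetical helper; illustrates the scan pattern seen in this trace
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # one 'continue' per non-matching key
            echo "$val"                        # e.g. 'AnonHugePages: 0 kB' -> 0
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_value HugePages_Free           # example call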
00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.667 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7965736 kB' 'MemAvailable: 9475952 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 494944 kB' 'Inactive: 1351552 kB' 'Active(anon): 132744 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123856 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140156 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73348 kB' 'KernelStack: 6336 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
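[editor's note] At this point the AnonHugePages lookup has returned 0 (anon=0) and the HugePages_Surp scan has started; the Surp and Rsvd lookups that follow feed the verify_nr_hugepages checks echoed near the end of this trace (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0). A hedged sketch of that final accounting, reusing the hypothetical get_meminfo_value helper above:

    # sketch of the hugepages.sh@107/@109 checks visible later in this trace
    nr_hugepages=1025                                # requested by odd_alloc
    surp=$(get_meminfo_value HugePages_Surp)         # 0 in this run
    resv=$(get_meminfo_value HugePages_Rsvd)         # 0 in this run
    total=$(get_meminfo_value HugePages_Total)       # 1025 in this run
    (( total == nr_hugepages + surp + resv )) || exit 1
    (( total == nr_hugepages ))                      # also holds when surp == resv == 0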
00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.668 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.669 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 
15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7965736 kB' 'MemAvailable: 9475952 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 494860 kB' 'Inactive: 1351552 kB' 'Active(anon): 132660 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123820 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140140 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73332 kB' 'KernelStack: 6336 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 364052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 
15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.670 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.671 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 
15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.672 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.673 nr_hugepages=1025 00:03:56.673 resv_hugepages=0 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.673 surplus_hugepages=0 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.673 anon_hugepages=0 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7965736 kB' 'MemAvailable: 9475952 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 495152 kB' 'Inactive: 1351552 kB' 'Active(anon): 132952 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 124128 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140120 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73312 kB' 'KernelStack: 6320 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.673 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.674 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.675 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7965736 kB' 'MemUsed: 4276240 kB' 'SwapCached: 0 kB' 'Active: 495020 kB' 'Inactive: 1351552 kB' 'Active(anon): 132820 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1724216 kB' 'Mapped: 48764 kB' 'AnonPages: 123968 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66808 kB' 'Slab: 140120 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
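(Editor's sketch) The surrounding trace is setup/common.sh's get_meminfo walking /sys/devices/system/node/node0/meminfo field by field with IFS=': ' read, skipping every key that is not HugePages_Surp, and echoing its value (0, a little further down). A minimal standalone sketch of that scan pattern, reconstructed only from the visible trace; the name get_meminfo_sketch and the simplified flow are illustrative assumptions, not the SPDK helper verbatim:

  #!/usr/bin/env bash
  # Sketch of the meminfo scan seen in the trace: pick the per-node file when a
  # node is given, strip the "Node N " prefix, then read key/value pairs with
  # IFS=': ' and print the value of the requested field.
  shopt -s extglob
  get_meminfo_sketch() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node N " prefix
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] || continue    # the escaped [[ ... == \H\u\g\e... ]] checks in the trace
      echo "${val:-0}"
      return 0
    done
    return 1
  }
  get_meminfo_sketch HugePages_Surp 0     # prints 0 for the run traced here
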
00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.676 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.677 node0=1025 expecting 1025 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:56.677 ************************************ 00:03:56.677 END TEST odd_alloc 00:03:56.677 ************************************ 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:56.677 00:03:56.677 real 0m0.561s 00:03:56.677 user 0m0.283s 00:03:56.677 sys 0m0.284s 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.677 15:49:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:56.936 15:49:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:56.936 15:49:50 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:56.936 15:49:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- 
# '[' 2 -le 1 ']' 00:03:56.936 15:49:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.936 15:49:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.936 ************************************ 00:03:56.936 START TEST custom_alloc 00:03:56.936 ************************************ 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # 
get_test_nr_hugepages_per_node 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.936 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:57.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.198 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.198 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.198 15:49:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9019236 kB' 'MemAvailable: 10529452 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 494936 kB' 'Inactive: 1351552 kB' 'Active(anon): 132736 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123868 kB' 'Mapped: 49020 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140064 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73256 kB' 'KernelStack: 6324 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.198 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9019236 kB' 'MemAvailable: 10529452 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 494752 kB' 'Inactive: 1351552 kB' 'Active(anon): 132552 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123952 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140100 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73292 kB' 'KernelStack: 6352 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.199 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.200 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.201 15:49:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9019236 kB' 'MemAvailable: 10529452 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 494760 kB' 'Inactive: 1351552 kB' 'Active(anon): 132560 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123952 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140096 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73288 kB' 'KernelStack: 6352 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.201 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.202 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.203 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.203 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.203 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.203 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.203 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:57.463 nr_hugepages=512 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.463 resv_hugepages=0 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.463 surplus_hugepages=0 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.463 anon_hugepages=0 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.463 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.463 
15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9019616 kB' 'MemAvailable: 10529832 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 494960 kB' 'Inactive: 1351552 kB' 'Active(anon): 132760 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123964 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140092 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73284 kB' 'KernelStack: 6352 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.464 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9019616 kB' 'MemUsed: 3222360 kB' 'SwapCached: 0 kB' 'Active: 494732 kB' 'Inactive: 1351552 kB' 'Active(anon): 132532 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1724216 kB' 'Mapped: 48764 kB' 'AnonPages: 123864 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66808 kB' 'Slab: 140096 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.465 
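[editorial sketch] The wall of trace above is setup/common.sh's get_meminfo scanning every key of /proc/meminfo (or of /sys/devices/system/node/node0/meminfo for the per-node call) until it reaches the one it was asked for. For readers following the log, here is a minimal self-contained sketch of that lookup reconstructed purely from this xtrace; the function and variable names mirror the trace, but the body is an approximation, not the actual SPDK script.

shopt -s extglob

# Look up one key in /proc/meminfo, or in the per-node meminfo file when a
# node number is given. Per-node entries are prefixed with "Node <N> ",
# which is stripped before matching (the mem=("${mem[@]#Node +([0-9]) }")
# step visible in the trace).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val _ line

    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of "continue" above
        echo "${val:-0}"
        return 0
    done
    return 1
}

# Examples matching the values printed in this log:
#   get_meminfo HugePages_Rsvd     -> 0    (system-wide)
#   get_meminfo HugePages_Total    -> 512
#   get_meminfo HugePages_Surp 0   -> 0    (node 0)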
15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.465 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.466 15:49:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.467 15:49:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.467 15:49:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.467 15:49:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:57.467 node0=512 expecting 512 00:03:57.467 15:49:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:57.467 00:03:57.467 real 0m0.601s 00:03:57.467 user 0m0.285s 00:03:57.467 sys 0m0.308s 00:03:57.467 ************************************ 00:03:57.467 END TEST custom_alloc 00:03:57.467 ************************************ 00:03:57.467 15:49:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.467 15:49:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.467 15:49:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:57.467 15:49:51 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:57.467 15:49:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.467 15:49:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.467 15:49:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.467 ************************************ 00:03:57.467 START TEST no_shrink_alloc 00:03:57.467 ************************************ 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- 
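[editorial sketch] By this point custom_alloc has passed (node0=512 expecting 512, then the END TEST banner and timings), and no_shrink_alloc starts by turning a requested size into a hugepage count pinned to node 0. The arithmetic implied by the trace is simply size / Hugepagesize, i.e. 2097152 kB / 2048 kB = 1024 pages, and the verification pass re-checks HugePages_Total against nr_hugepages + surplus + reserved. A rough rendering of that bookkeeping, reusing the get_meminfo sketch above, might look like the following; names mirror the xtrace, but the control flow is a reconstruction, not the SPDK source.

# no_shrink_alloc requests 2 GiB worth of hugepages on node 0.
size_kb=2097152                                   # 2097152 kB = 2 GiB
hugepagesize_kb=$(get_meminfo Hugepagesize)       # 2048 kB on this VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))     # -> 1024 pages
declare -A nodes_test=([0]=$nr_hugepages)         # user_nodes=('0') in the trace

# The verification pattern custom_alloc just ran for its 512 pages:
total=$(get_meminfo HugePages_Total)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

for node in "${!nodes_test[@]}"; do
    echo "node${node}=$(get_meminfo HugePages_Total "$node") expecting ${nodes_test[$node]}"
done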
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.467 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:57.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.725 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.725 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.725 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.726 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7972656 kB' 'MemAvailable: 9482872 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 495184 kB' 'Inactive: 1351552 kB' 'Active(anon): 132984 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 124340 kB' 'Mapped: 48896 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140256 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73448 kB' 'KernelStack: 6324 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:57.726 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.726 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.726 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.726 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.726 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.726 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.726 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.989 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.990 15:49:51 
00:03:57.990 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: /proc/meminfo keys Mlocked through HardwareCorrupted checked against AnonHugePages, no match, continue)
00:03:57.991 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:57.991 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.991 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:57.991 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
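The block above is setup/common.sh's get_meminfo helper walking /proc/meminfo entry by entry with IFS=': ' read, comparing each key against the requested one (AnonHugePages here) and echoing its value on the first match; hugepages.sh then stores the result as anon. A minimal bash sketch of that lookup pattern follows; the helper name lookup_meminfo and the "missing key prints 0" fallback are assumptions for illustration, not the script's exact code.

    # Sketch of the key scan the xtrace above is exercising: read
    # /proc/meminfo with IFS=': ' until the requested key matches,
    # then print its numeric value.
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching key: keep scanning
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        echo 0                                 # assumed fallback for an absent key
    }

    anon=$(lookup_meminfo AnonHugePages)   # 0 on this runner, matching the trace
    surp=$(lookup_meminfo HugePages_Surp)
    echo "anon=$anon surp=$surp"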
00:03:57.991 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:57.991 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # (xtrace condensed: local get=HugePages_Surp, node=, var, val, mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"))
00:03:57.991 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7972404 kB' 'MemAvailable: 9482620 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 495032 kB' 'Inactive: 1351552 kB' 'Active(anon): 132832 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123948 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140292 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73484 kB' 'KernelStack: 6336 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
00:03:57.991 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: keys MemTotal through HugePages_Rsvd checked against HugePages_Surp, no match, continue)
00:03:57.992 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.992 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.992 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:57.992 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
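The condensed setup lines above also show the helper's node-aware side: it tests for a per-node meminfo under /sys/devices/system/node/ (the trace shows node/node/meminfo because no node number was requested), reads the chosen file with mapfile, and strips the "Node N " prefix that sysfs adds before scanning for the key. A hedged sketch of that per-node variant, with an illustrative function name and fallback behaviour rather than the script's exact implementation:

    # Sketch of the node-local lookup implied by the trace; needs extglob
    # for the +([0-9]) pattern used in the prefix strip.
    shopt -s extglob
    lookup_node_meminfo() {
        local get=$1 node=${2:-} mem_f line var val _
        local -a mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done
        echo 0                             # assumed fallback for an absent key
    }

    # e.g. huge pages currently free on NUMA node 0 (falls back to the
    # system-wide figure if the sysfs file is absent):
    lookup_node_meminfo HugePages_Free 0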
00:03:57.992 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:57.992 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # (xtrace condensed: local get=HugePages_Rsvd, node=, var, val, mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"))
00:03:57.993 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7972404 kB' 'MemAvailable: 9482620 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 495096 kB' 'Inactive: 1351552 kB' 'Active(anon): 132896 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 124012 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140292 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73484 kB' 'KernelStack: 6352 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
00:03:57.993 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: keys MemTotal through HugePages_Free checked against HugePages_Rsvd, no match, continue)
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:57.994 nr_hugepages=1024
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:57.994 resv_hugepages=0
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:57.994 surplus_hugepages=0
00:03:57.994 anon_hugepages=0
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
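With anon, surp and resv all 0 and HugePages_Total reporting 1024, the test asserts that the configured pool is fully accounted for: (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) both hold, and it then re-reads HugePages_Total below. A small sketch of that consistency check under the values from this run; the failure branch and its message are illustrative only.

    # Sketch of the accounting assertion at hugepages.sh@107-109.
    nr_hugepages=1024   # HugePages_Total
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    requested=1024      # pages the no_shrink_alloc test configured

    if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
        echo "hugepage pool intact: $nr_hugepages pages, no surplus or reserved pages"
    else
        echo "unexpected hugepage accounting: total=$nr_hugepages surp=$surp resv=$resv" >&2
        exit 1
    fi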
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:57.994 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # (xtrace condensed: local get=HugePages_Total, node=, var, val, mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"))
00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7972404 kB' 'MemAvailable: 9482620 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 495028 kB' 'Inactive: 1351552 kB' 'Active(anon): 132828 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 124012 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140284 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73476 kB' 'KernelStack: 6352 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: keys MemTotal through Slab checked against HugePages_Total so far, no match; scan continues)
-r var val _ 00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.995 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 
15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7973108 kB' 'MemUsed: 4268868 kB' 'SwapCached: 0 kB' 'Active: 495056 kB' 'Inactive: 1351552 kB' 'Active(anon): 132856 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1724216 kB' 'Mapped: 48764 kB' 'AnonPages: 124016 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66808 kB' 'Slab: 140284 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.996 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.997 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.998 node0=1024 expecting 1024 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.998 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.255 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.255 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:58.255 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:58.518 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.518 15:49:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7971928 kB' 'MemAvailable: 9482144 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 491012 kB' 'Inactive: 1351552 kB' 'Active(anon): 128812 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 119984 kB' 'Mapped: 48332 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140312 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73504 kB' 'KernelStack: 6296 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 
15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.518 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
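[editor's note] The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" above are the xtrace of setup/common.sh's get_meminfo loop: it reads /proc/meminfo (or a node's meminfo file when one exists) with IFS=': ', skips every field until the requested key matches, then echoes that value and returns. The following is a minimal standalone sketch of the same lookup for the plain /proc/meminfo case; the helper name is illustrative and not part of the SPDK scripts.

```bash
#!/usr/bin/env bash
# Illustrative sketch only: fetch one field from /proc/meminfo the same way
# the traced loop does -- read each line with IFS=': ' and stop at the match.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # numeric value only; units (kB) land in $_ and are dropped
            return 0
        fi
    done </proc/meminfo
    return 1              # key not present
}

get_meminfo_sketch HugePages_Total   # e.g. prints 1024 on this runner
get_meminfo_sketch Hugepagesize      # prints the huge page size in kB, e.g. 2048
```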
00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7972080 kB' 'MemAvailable: 9482296 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 490532 kB' 'Inactive: 1351552 kB' 'Active(anon): 128332 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 119180 kB' 'Mapped: 48148 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140212 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73404 kB' 'KernelStack: 6236 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 
kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.519 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 
15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.520 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
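The xtrace entries above and below show setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: each line is split on ': ' into var and val, every field that is not the requested key (here HugePages_Surp) falls through the continue branch, and the matching field's value is echoed back to hugepages.sh. A minimal standalone sketch of that lookup pattern, with an illustrative function name and without the per-node /sys/devices/system/node handling visible in the trace, could look like the following; the trace itself resumes right after it.

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern in the surrounding trace: split each
# /proc/meminfo line on ': ', skip non-matching fields, print the value of
# the requested one. The function name is illustrative, not SPDK's code.
get_meminfo_value() {
    local get=$1 mem_f=/proc/meminfo var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # non-matching fields fall through, as in the trace
        echo "$val"                       # the 'kB' unit, when present, lands in the discarded third field
        return 0
    done <"$mem_f"
    return 1                              # requested field not present
}

# Mirrors the lookup being traced here:
get_meminfo_value HugePages_Surp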
00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7972080 kB' 'MemAvailable: 9482296 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 490388 kB' 'Inactive: 1351552 kB' 'Active(anon): 128188 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 119304 kB' 'Mapped: 48020 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140192 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73384 kB' 'KernelStack: 6248 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.521 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
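A little further down the trace, the values returned by these lookups are folded back into hugepages.sh: anon, surp and resv all come back as 0, the script echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and then checks (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) before re-reading HugePages_Total. A short sketch of that bookkeeping is given below; it reuses the illustrative get_meminfo_value helper from the previous sketch, the variable names follow the trace, and everything else is assumed for the example rather than taken from SPDK's scripts.

#!/usr/bin/env bash
# Sketch of the bookkeeping traced below, assuming the illustrative
# get_meminfo_value helper from the earlier sketch is in scope. Variable
# names (anon, surp, resv, nr_hugepages) follow the hugepages.sh trace.
nr_hugepages=1024

anon=$(get_meminfo_value AnonHugePages)
surp=$(get_meminfo_value HugePages_Surp)
resv=$(get_meminfo_value HugePages_Rsvd)

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The traced checks: the configured total must account for surplus and
# reserved pages, and here simply equal nr_hugepages.
(( nr_hugepages + surp + resv == 1024 )) || echo "unexpected huge page accounting" >&2
(( nr_hugepages == 1024 )) || echo "nr_hugepages mismatch" >&2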
00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.522 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.523 nr_hugepages=1024 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.523 resv_hugepages=0 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.523 surplus_hugepages=0 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.523 anon_hugepages=0 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7972080 kB' 'MemAvailable: 9482296 kB' 'Buffers: 2436 kB' 'Cached: 1721780 kB' 'SwapCached: 0 kB' 'Active: 490108 kB' 'Inactive: 1351552 kB' 'Active(anon): 127908 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 119304 kB' 'Mapped: 48020 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140192 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73384 kB' 'KernelStack: 6248 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.523 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7972080 kB' 'MemUsed: 4269896 kB' 'SwapCached: 0 kB' 'Active: 490332 kB' 'Inactive: 1351552 kB' 'Active(anon): 128132 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1724216 kB' 'Mapped: 48020 kB' 'AnonPages: 119236 kB' 'Shmem: 10464 kB' 'KernelStack: 6248 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66808 kB' 'Slab: 140192 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 
15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.524 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.525 node0=1024 expecting 1024 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.525 00:03:58.525 real 0m1.089s 00:03:58.525 user 0m0.521s 00:03:58.525 sys 0m0.593s 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.525 15:49:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:58.525 ************************************ 00:03:58.525 END TEST no_shrink_alloc 00:03:58.525 ************************************ 00:03:58.525 15:49:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:58.525 15:49:52 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:58.525 15:49:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:58.525 15:49:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:58.525 15:49:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.525 15:49:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.525 15:49:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.525 15:49:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.525 15:49:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:58.525 15:49:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:58.525 00:03:58.525 real 0m4.846s 00:03:58.525 user 0m2.261s 00:03:58.525 sys 0m2.497s 00:03:58.525 15:49:52 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.525 15:49:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.525 ************************************ 00:03:58.525 END TEST hugepages 00:03:58.525 ************************************ 00:03:58.783 15:49:52 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:58.783 15:49:52 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:58.783 15:49:52 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.783 15:49:52 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.783 15:49:52 
setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.783 ************************************ 00:03:58.783 START TEST driver 00:03:58.783 ************************************ 00:03:58.783 15:49:52 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:58.783 * Looking for test storage... 00:03:58.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:58.783 15:49:52 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:58.783 15:49:52 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.783 15:49:52 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.348 15:49:52 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:59.348 15:49:52 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.348 15:49:52 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.348 15:49:52 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:59.348 ************************************ 00:03:59.348 START TEST guess_driver 00:03:59.348 ************************************ 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:59.348 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:59.348 Looking for 
driver=uio_pci_generic 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.348 15:49:52 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:59.939 15:49:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:59.939 15:49:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:59.939 15:49:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.939 15:49:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.939 15:49:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:59.939 15:49:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.196 15:49:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.196 15:49:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:00.196 15:49:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.196 15:49:53 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:00.196 15:49:53 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:00.196 15:49:53 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.196 15:49:53 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.762 00:04:00.762 real 0m1.436s 00:04:00.762 user 0m0.538s 00:04:00.762 sys 0m0.922s 00:04:00.762 15:49:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.762 15:49:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:00.762 ************************************ 00:04:00.762 END TEST guess_driver 00:04:00.762 ************************************ 00:04:00.762 15:49:54 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:00.762 ************************************ 00:04:00.762 END TEST driver 00:04:00.762 ************************************ 00:04:00.762 00:04:00.762 real 0m2.101s 00:04:00.762 user 0m0.749s 00:04:00.762 sys 0m1.418s 00:04:00.762 15:49:54 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.762 15:49:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:00.762 15:49:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:00.762 15:49:54 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:00.762 15:49:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.762 15:49:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.762 15:49:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:00.762 ************************************ 00:04:00.762 START TEST devices 00:04:00.762 ************************************ 00:04:00.762 15:49:54 setup.sh.devices -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:00.762 * Looking for test storage... 00:04:00.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:00.762 15:49:54 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:00.762 15:49:54 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:00.762 15:49:54 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.762 15:49:54 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:01.695 15:49:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:01.695 
15:49:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:01.695 15:49:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:01.695 15:49:55 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:01.695 No valid GPT data, bailing 00:04:01.695 15:49:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:01.695 15:49:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:01.695 15:49:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:01.695 15:49:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:01.695 15:49:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:01.695 15:49:55 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:01.695 15:49:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:01.695 15:49:55 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:01.695 No valid GPT data, bailing 00:04:01.695 15:49:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:01.695 15:49:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:01.695 15:49:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:01.695 15:49:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:01.695 15:49:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:01.695 15:49:55 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:01.695 15:49:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:01.696 15:49:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:01.696 15:49:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:01.696 15:49:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:01.696 15:49:55 setup.sh.devices -- 
setup/devices.sh@201 -- # ctrl=nvme0 00:04:01.696 15:49:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:01.696 15:49:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:01.696 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:01.696 15:49:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:01.696 15:49:55 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:01.954 No valid GPT data, bailing 00:04:01.954 15:49:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:01.954 15:49:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:01.954 15:49:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:01.954 15:49:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:01.954 15:49:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:01.954 15:49:55 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:01.954 15:49:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:01.954 15:49:55 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:01.954 No valid GPT data, bailing 00:04:01.954 15:49:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:01.954 15:49:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:01.954 15:49:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:01.954 15:49:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:01.954 15:49:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:01.954 15:49:55 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:01.954 15:49:55 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:01.954 15:49:55 setup.sh.devices -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.954 15:49:55 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.954 15:49:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:01.954 ************************************ 00:04:01.954 START TEST nvme_mount 00:04:01.954 ************************************ 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:01.954 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:01.955 15:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:02.887 Creating new GPT entries in memory. 00:04:02.887 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:02.887 other utilities. 00:04:02.887 15:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:02.887 15:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.887 15:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:02.887 15:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:02.887 15:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:04.260 Creating new GPT entries in memory. 00:04:04.260 The operation has completed successfully. 
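The partitioning sequence traced above (sgdisk --zap-all, a udev-synchronized sgdisk --new over sectors 2048:264191, then an ext4 format of the new partition) can be reproduced outside the test harness. A minimal sketch, assuming a disposable test disk in DISK and substituting the standard udevadm settle for the repo's sync_dev_uevents.sh helper:

    #!/usr/bin/env bash
    set -euo pipefail
    DISK=/dev/nvme0n1                      # assumption: a throwaway test disk, not a system drive
    sgdisk "$DISK" --zap-all               # destroy any existing GPT/MBR structures
    sgdisk "$DISK" --new=1:2048:264191     # create partition 1 over the same sector range as the trace
    udevadm settle                         # wait until udev has created ${DISK}p1
    mkfs.ext4 -qF "${DISK}p1"              # quiet, forced ext4 format, mirroring the mkfs step in the trace
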
00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58921 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:04.260 15:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.518 15:49:57 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:04.518 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:04.518 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:04.777 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:04.777 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:04.777 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:04.777 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.777 15:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:05.036 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.036 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:05.036 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:05.036 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.036 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.036 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.294 15:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:05.552 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.552 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:05.552 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:05.552 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.552 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.552 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:05.810 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:05.810 00:04:05.810 real 0m3.887s 00:04:05.810 user 0m0.581s 00:04:05.810 sys 0m1.027s 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.810 15:49:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:05.810 ************************************ 00:04:05.810 END TEST nvme_mount 00:04:05.810 
************************************ 00:04:05.810 15:49:59 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:05.810 15:49:59 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:05.810 15:49:59 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.810 15:49:59 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.810 15:49:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:05.810 ************************************ 00:04:05.810 START TEST dm_mount 00:04:05.810 ************************************ 00:04:05.810 15:49:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:05.810 15:49:59 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:05.811 15:49:59 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:07.225 Creating new GPT entries in memory. 00:04:07.225 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:07.225 other utilities. 00:04:07.225 15:50:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:07.225 15:50:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.225 15:50:00 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:07.225 15:50:00 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:07.225 15:50:00 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:08.156 Creating new GPT entries in memory. 00:04:08.156 The operation has completed successfully. 00:04:08.156 15:50:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:08.156 15:50:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.156 15:50:01 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:08.156 15:50:01 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:08.156 15:50:01 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:09.089 The operation has completed successfully. 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59349 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:09.089 
15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.089 15:50:02 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.346 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.346 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:09.346 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:09.346 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.346 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.346 15:50:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.346 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.346 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:09.603 15:50:03 
setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.603 15:50:03 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.861 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.861 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:09.861 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:09.861 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.861 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.861 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.861 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.861 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 
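The dm_mount test above layers a device-mapper node over the two 262144-sector partitions that sgdisk created (sectors 2048-264191 and 264192-526335), then repeats the mkfs/mount/verify/teardown cycle used for nvme_mount. The trace shows 'dmsetup create nvme_dm_test' but not the table it was fed, so the snippet below is only a sketch of an equivalent flow: the linear concatenation table is an assumption, while the device names, mkfs flags and mount point are taken from the log.

    # assumed table: concatenate nvme0n1p1 and nvme0n1p2 (262144 sectors each) into one dm node
    printf '%s\n' \
        '0 262144 linear /dev/nvme0n1p1 0' \
        '262144 262144 linear /dev/nvme0n1p2 0' | dmsetup create nvme_dm_test
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
    # teardown, mirroring the cleanup_dm steps in the surrounding trace
    umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
    dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2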
00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:10.117 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:10.117 00:04:10.117 real 0m4.187s 00:04:10.117 user 0m0.439s 00:04:10.117 sys 0m0.723s 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.117 15:50:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:10.117 ************************************ 00:04:10.117 END TEST dm_mount 00:04:10.117 ************************************ 00:04:10.117 15:50:03 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:10.117 15:50:03 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:10.117 15:50:03 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:10.117 15:50:03 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:10.117 15:50:03 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.117 15:50:03 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:10.117 15:50:03 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.117 15:50:03 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:10.374 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:10.374 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:10.374 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:10.374 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:10.374 15:50:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:10.374 15:50:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:10.374 15:50:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.374 15:50:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.374 15:50:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.374 15:50:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.374 15:50:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:10.374 00:04:10.374 real 0m9.643s 00:04:10.374 user 0m1.740s 00:04:10.374 sys 0m2.322s 00:04:10.374 15:50:04 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.374 15:50:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:10.374 ************************************ 00:04:10.374 END TEST devices 00:04:10.374 ************************************ 00:04:10.374 15:50:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:10.374 00:04:10.374 real 0m21.520s 00:04:10.374 user 0m6.885s 00:04:10.374 sys 0m8.945s 00:04:10.374 15:50:04 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.374 15:50:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.374 ************************************ 00:04:10.374 END TEST setup.sh 00:04:10.374 ************************************ 00:04:10.631 15:50:04 -- common/autotest_common.sh@1142 -- # return 0 00:04:10.631 15:50:04 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:11.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.196 Hugepages 00:04:11.196 node hugesize free / total 00:04:11.196 node0 1048576kB 0 / 0 00:04:11.196 node0 2048kB 2048 / 2048 00:04:11.196 00:04:11.196 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:11.196 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:11.196 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:11.454 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:11.454 15:50:04 -- spdk/autotest.sh@130 -- # uname -s 00:04:11.454 15:50:04 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:11.454 15:50:04 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:11.454 15:50:04 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.019 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.020 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.278 15:50:05 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:13.213 15:50:06 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:13.213 15:50:06 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:13.213 15:50:06 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:13.213 15:50:06 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:13.213 15:50:06 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:13.213 15:50:06 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:13.213 15:50:06 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.213 15:50:06 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:13.213 15:50:06 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:13.213 15:50:06 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:13.213 15:50:06 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:13.213 15:50:06 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.472 Waiting for block devices as requested 00:04:13.730 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:13.730 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:13.730 15:50:07 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:13.730 15:50:07 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:13.730 15:50:07 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:13.730 15:50:07 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:13.730 15:50:07 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:13.730 15:50:07 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:13.730 15:50:07 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:13.730 15:50:07 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:13.730 15:50:07 -- common/autotest_common.sh@1539 -- # 
nvme_ctrlr=/dev/nvme1 00:04:13.730 15:50:07 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:13.730 15:50:07 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:13.730 15:50:07 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:13.730 15:50:07 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:13.730 15:50:07 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:13.730 15:50:07 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:13.730 15:50:07 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:13.730 15:50:07 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:13.730 15:50:07 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:13.730 15:50:07 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:13.730 15:50:07 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:13.731 15:50:07 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:13.731 15:50:07 -- common/autotest_common.sh@1557 -- # continue 00:04:13.731 15:50:07 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:13.731 15:50:07 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:13.731 15:50:07 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:13.731 15:50:07 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:13.731 15:50:07 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:13.731 15:50:07 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:13.731 15:50:07 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:13.731 15:50:07 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:13.731 15:50:07 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:13.731 15:50:07 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:13.731 15:50:07 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:13.731 15:50:07 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:13.731 15:50:07 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:13.989 15:50:07 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:13.989 15:50:07 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:13.989 15:50:07 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:13.989 15:50:07 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:13.989 15:50:07 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:13.989 15:50:07 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:13.989 15:50:07 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:13.989 15:50:07 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:13.989 15:50:07 -- common/autotest_common.sh@1557 -- # continue 00:04:13.989 15:50:07 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:13.989 15:50:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:13.989 15:50:07 -- common/autotest_common.sh@10 -- # set +x 00:04:13.989 15:50:07 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:13.989 15:50:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.989 15:50:07 -- common/autotest_common.sh@10 -- # set +x 00:04:13.989 15:50:07 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.555 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:04:14.555 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.813 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.813 15:50:08 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:14.813 15:50:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:14.813 15:50:08 -- common/autotest_common.sh@10 -- # set +x 00:04:14.813 15:50:08 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:14.813 15:50:08 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:14.813 15:50:08 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:14.813 15:50:08 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:14.813 15:50:08 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:14.813 15:50:08 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:14.813 15:50:08 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:14.813 15:50:08 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:14.813 15:50:08 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.813 15:50:08 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:14.813 15:50:08 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:14.813 15:50:08 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:14.813 15:50:08 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:14.813 15:50:08 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:14.814 15:50:08 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:14.814 15:50:08 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:14.814 15:50:08 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:14.814 15:50:08 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:14.814 15:50:08 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:14.814 15:50:08 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:14.814 15:50:08 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:14.814 15:50:08 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:14.814 15:50:08 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:14.814 15:50:08 -- common/autotest_common.sh@1593 -- # return 0 00:04:14.814 15:50:08 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:14.814 15:50:08 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:14.814 15:50:08 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:14.814 15:50:08 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:14.814 15:50:08 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:14.814 15:50:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:14.814 15:50:08 -- common/autotest_common.sh@10 -- # set +x 00:04:14.814 15:50:08 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:14.814 15:50:08 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:14.814 15:50:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.814 15:50:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.814 15:50:08 -- common/autotest_common.sh@10 -- # set +x 00:04:14.814 ************************************ 00:04:14.814 START TEST env 00:04:14.814 ************************************ 00:04:14.814 15:50:08 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:15.073 * Looking for test storage... 
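The id-ctrl probing under nvme_namespace_revert above is how the harness decides whether each controller supports namespace management: OACS (Optional Admin Command Support) is read with 'nvme id-ctrl', and bit 3 (mask 0x8) of that field is the Namespace Management/Attachment capability, which is why oacs=' 0x12a' becomes oacs_ns_manage=8. The follow-up unvmcap read returns 0 (no unallocated capacity), so the loop just continues. The bit test can be reproduced directly; controller name as in the log:

    oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)   # ' 0x12a' in this run
    echo $(( oacs & 0x8 ))                                      # prints 8 -> namespace management supported

The later opal_revert_cleanup pass compares each controller's PCI device ID against 0x0a54 (an Intel data-center NVMe ID); the emulated controllers here report 0x0010, so no OPAL revert is attempted.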
00:04:15.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:15.073 15:50:08 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:15.073 15:50:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.073 15:50:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.073 15:50:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.073 ************************************ 00:04:15.073 START TEST env_memory 00:04:15.073 ************************************ 00:04:15.073 15:50:08 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:15.073 00:04:15.073 00:04:15.073 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.073 http://cunit.sourceforge.net/ 00:04:15.073 00:04:15.073 00:04:15.073 Suite: memory 00:04:15.073 Test: alloc and free memory map ...[2024-07-15 15:50:08.649776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.073 passed 00:04:15.073 Test: mem map translation ...[2024-07-15 15:50:08.687789] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.073 [2024-07-15 15:50:08.688180] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.073 [2024-07-15 15:50:08.688418] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.073 [2024-07-15 15:50:08.688592] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.073 passed 00:04:15.073 Test: mem map registration ...[2024-07-15 15:50:08.763221] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:15.073 [2024-07-15 15:50:08.763649] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:15.073 passed 00:04:15.331 Test: mem map adjacent registrations ...passed 00:04:15.331 00:04:15.331 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.331 suites 1 1 n/a 0 0 00:04:15.331 tests 4 4 4 0 0 00:04:15.331 asserts 152 152 152 0 n/a 00:04:15.331 00:04:15.331 Elapsed time = 0.248 seconds 00:04:15.331 00:04:15.331 real 0m0.271s 00:04:15.331 user 0m0.247s 00:04:15.331 sys 0m0.017s 00:04:15.331 15:50:08 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.331 15:50:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.331 ************************************ 00:04:15.331 END TEST env_memory 00:04:15.331 ************************************ 00:04:15.331 15:50:08 env -- common/autotest_common.sh@1142 -- # return 0 00:04:15.331 15:50:08 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:15.331 15:50:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.331 15:50:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.331 15:50:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.331 ************************************ 00:04:15.331 START TEST env_vtophys 
00:04:15.331 ************************************ 00:04:15.331 15:50:08 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:15.331 EAL: lib.eal log level changed from notice to debug 00:04:15.331 EAL: Detected lcore 0 as core 0 on socket 0 00:04:15.331 EAL: Detected lcore 1 as core 0 on socket 0 00:04:15.331 EAL: Detected lcore 2 as core 0 on socket 0 00:04:15.332 EAL: Detected lcore 3 as core 0 on socket 0 00:04:15.332 EAL: Detected lcore 4 as core 0 on socket 0 00:04:15.332 EAL: Detected lcore 5 as core 0 on socket 0 00:04:15.332 EAL: Detected lcore 6 as core 0 on socket 0 00:04:15.332 EAL: Detected lcore 7 as core 0 on socket 0 00:04:15.332 EAL: Detected lcore 8 as core 0 on socket 0 00:04:15.332 EAL: Detected lcore 9 as core 0 on socket 0 00:04:15.332 EAL: Maximum logical cores by configuration: 128 00:04:15.332 EAL: Detected CPU lcores: 10 00:04:15.332 EAL: Detected NUMA nodes: 1 00:04:15.332 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:15.332 EAL: Detected shared linkage of DPDK 00:04:15.332 EAL: No shared files mode enabled, IPC will be disabled 00:04:15.332 EAL: Selected IOVA mode 'PA' 00:04:15.332 EAL: Probing VFIO support... 00:04:15.332 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:15.332 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:15.332 EAL: Ask a virtual area of 0x2e000 bytes 00:04:15.332 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:15.332 EAL: Setting up physically contiguous memory... 00:04:15.332 EAL: Setting maximum number of open files to 524288 00:04:15.332 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:15.332 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:15.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.332 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:15.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.332 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:15.332 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:15.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.332 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:15.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.332 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:15.332 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:15.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.332 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:15.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.332 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:15.332 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:15.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.332 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:15.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.332 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:15.332 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:15.332 EAL: Hugepages will be freed exactly as allocated. 
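A quick sanity check on the EAL memory layout printed above: each of the 4 memseg lists holds n_segs:8192 segments of the detected 2 MiB hugepage size (shown as page size 0x800kB), which is exactly the 0x400000000-byte virtual area reserved per list, i.e. 16 GiB per list and 64 GiB of address space in total:

    printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # 0x400000000 -> 16 GiB reserved per memseg list
    echo "$(( 4 * 16 )) GiB"                        # 64 GiB of virtual address space across the 4 lists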
00:04:15.332 EAL: No shared files mode enabled, IPC is disabled 00:04:15.332 EAL: No shared files mode enabled, IPC is disabled 00:04:15.332 EAL: TSC frequency is ~2200000 KHz 00:04:15.332 EAL: Main lcore 0 is ready (tid=7fb407897a00;cpuset=[0]) 00:04:15.332 EAL: Trying to obtain current memory policy. 00:04:15.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.332 EAL: Restoring previous memory policy: 0 00:04:15.332 EAL: request: mp_malloc_sync 00:04:15.332 EAL: No shared files mode enabled, IPC is disabled 00:04:15.332 EAL: Heap on socket 0 was expanded by 2MB 00:04:15.332 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:15.332 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:15.332 EAL: Mem event callback 'spdk:(nil)' registered 00:04:15.332 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:15.590 00:04:15.590 00:04:15.590 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.590 http://cunit.sourceforge.net/ 00:04:15.590 00:04:15.590 00:04:15.590 Suite: components_suite 00:04:15.590 Test: vtophys_malloc_test ...passed 00:04:15.590 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:15.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.590 EAL: Restoring previous memory policy: 4 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was expanded by 4MB 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was shrunk by 4MB 00:04:15.590 EAL: Trying to obtain current memory policy. 00:04:15.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.590 EAL: Restoring previous memory policy: 4 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was expanded by 6MB 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was shrunk by 6MB 00:04:15.590 EAL: Trying to obtain current memory policy. 00:04:15.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.590 EAL: Restoring previous memory policy: 4 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was expanded by 10MB 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was shrunk by 10MB 00:04:15.590 EAL: Trying to obtain current memory policy. 
00:04:15.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.590 EAL: Restoring previous memory policy: 4 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was expanded by 18MB 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was shrunk by 18MB 00:04:15.590 EAL: Trying to obtain current memory policy. 00:04:15.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.590 EAL: Restoring previous memory policy: 4 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was expanded by 34MB 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was shrunk by 34MB 00:04:15.590 EAL: Trying to obtain current memory policy. 00:04:15.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.590 EAL: Restoring previous memory policy: 4 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was expanded by 66MB 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was shrunk by 66MB 00:04:15.590 EAL: Trying to obtain current memory policy. 00:04:15.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.590 EAL: Restoring previous memory policy: 4 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was expanded by 130MB 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was shrunk by 130MB 00:04:15.590 EAL: Trying to obtain current memory policy. 00:04:15.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.590 EAL: Restoring previous memory policy: 4 00:04:15.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.590 EAL: request: mp_malloc_sync 00:04:15.590 EAL: No shared files mode enabled, IPC is disabled 00:04:15.590 EAL: Heap on socket 0 was expanded by 258MB 00:04:15.848 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.848 EAL: request: mp_malloc_sync 00:04:15.848 EAL: No shared files mode enabled, IPC is disabled 00:04:15.848 EAL: Heap on socket 0 was shrunk by 258MB 00:04:15.848 EAL: Trying to obtain current memory policy. 
00:04:15.848 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.848 EAL: Restoring previous memory policy: 4 00:04:15.848 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.848 EAL: request: mp_malloc_sync 00:04:15.848 EAL: No shared files mode enabled, IPC is disabled 00:04:15.848 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.106 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.106 EAL: request: mp_malloc_sync 00:04:16.106 EAL: No shared files mode enabled, IPC is disabled 00:04:16.106 EAL: Heap on socket 0 was shrunk by 514MB 00:04:16.106 EAL: Trying to obtain current memory policy. 00:04:16.106 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.365 EAL: Restoring previous memory policy: 4 00:04:16.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.365 EAL: request: mp_malloc_sync 00:04:16.365 EAL: No shared files mode enabled, IPC is disabled 00:04:16.365 EAL: Heap on socket 0 was expanded by 1026MB 00:04:16.624 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.883 EAL: request: mp_malloc_sync 00:04:16.883 EAL: No shared files mode enabled, IPC is disabled 00:04:16.883 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:16.883 passed 00:04:16.883 00:04:16.883 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.883 suites 1 1 n/a 0 0 00:04:16.883 tests 2 2 2 0 0 00:04:16.883 asserts 5316 5316 5316 0 n/a 00:04:16.883 00:04:16.883 Elapsed time = 1.268 seconds 00:04:16.883 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.883 EAL: request: mp_malloc_sync 00:04:16.883 EAL: No shared files mode enabled, IPC is disabled 00:04:16.883 EAL: Heap on socket 0 was shrunk by 2MB 00:04:16.883 EAL: No shared files mode enabled, IPC is disabled 00:04:16.883 EAL: No shared files mode enabled, IPC is disabled 00:04:16.883 EAL: No shared files mode enabled, IPC is disabled 00:04:16.883 00:04:16.883 real 0m1.471s 00:04:16.883 user 0m0.787s 00:04:16.883 sys 0m0.546s 00:04:16.883 15:50:10 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.883 15:50:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:16.883 ************************************ 00:04:16.883 END TEST env_vtophys 00:04:16.883 ************************************ 00:04:16.883 15:50:10 env -- common/autotest_common.sh@1142 -- # return 0 00:04:16.883 15:50:10 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:16.883 15:50:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.883 15:50:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.883 15:50:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.883 ************************************ 00:04:16.883 START TEST env_pci 00:04:16.883 ************************************ 00:04:16.883 15:50:10 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:16.883 00:04:16.883 00:04:16.883 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.883 http://cunit.sourceforge.net/ 00:04:16.883 00:04:16.883 00:04:16.883 Suite: pci 00:04:16.883 Test: pci_hook ...[2024-07-15 15:50:10.436043] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60542 has claimed it 00:04:16.883 passed 00:04:16.883 00:04:16.883 EAL: Cannot find device (10000:00:01.0) 00:04:16.883 EAL: Failed to attach device on primary process 00:04:16.883 Run Summary: Type Total Ran Passed Failed 
Inactive 00:04:16.883 suites 1 1 n/a 0 0 00:04:16.883 tests 1 1 1 0 0 00:04:16.883 asserts 25 25 25 0 n/a 00:04:16.883 00:04:16.883 Elapsed time = 0.003 seconds 00:04:16.883 00:04:16.883 real 0m0.020s 00:04:16.883 user 0m0.008s 00:04:16.883 sys 0m0.012s 00:04:16.883 15:50:10 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.883 15:50:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:16.883 ************************************ 00:04:16.883 END TEST env_pci 00:04:16.883 ************************************ 00:04:16.883 15:50:10 env -- common/autotest_common.sh@1142 -- # return 0 00:04:16.883 15:50:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:16.883 15:50:10 env -- env/env.sh@15 -- # uname 00:04:16.883 15:50:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:16.883 15:50:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:16.883 15:50:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.883 15:50:10 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:16.883 15:50:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.883 15:50:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.883 ************************************ 00:04:16.883 START TEST env_dpdk_post_init 00:04:16.883 ************************************ 00:04:16.883 15:50:10 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.883 EAL: Detected CPU lcores: 10 00:04:16.883 EAL: Detected NUMA nodes: 1 00:04:16.883 EAL: Detected shared linkage of DPDK 00:04:16.883 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:16.883 EAL: Selected IOVA mode 'PA' 00:04:17.142 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.142 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:17.142 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:17.142 Starting DPDK initialization... 00:04:17.142 Starting SPDK post initialization... 00:04:17.142 SPDK NVMe probe 00:04:17.142 Attaching to 0000:00:10.0 00:04:17.142 Attaching to 0000:00:11.0 00:04:17.142 Attached to 0000:00:10.0 00:04:17.142 Attached to 0000:00:11.0 00:04:17.142 Cleaning up... 
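Two notes on the env results above: the *ERROR* lines printed during the pci suite are not a failure in this run (the CUnit summary right after them reports 1/1 tests and 25/25 asserts passing), and env_dpdk_post_init is launched with exactly the two flags env.sh assembled just before it. Spelled out as a direct invocation:

    # -c 0x1                              core mask: run the app on core 0 only
    # --base-virtaddr=0x200000000000      map DPDK memory at a fixed virtual base address
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000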
00:04:17.142 00:04:17.142 real 0m0.185s 00:04:17.142 user 0m0.045s 00:04:17.142 sys 0m0.040s 00:04:17.142 15:50:10 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.142 15:50:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.142 ************************************ 00:04:17.142 END TEST env_dpdk_post_init 00:04:17.142 ************************************ 00:04:17.142 15:50:10 env -- common/autotest_common.sh@1142 -- # return 0 00:04:17.142 15:50:10 env -- env/env.sh@26 -- # uname 00:04:17.142 15:50:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:17.142 15:50:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.142 15:50:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.142 15:50:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.142 15:50:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.142 ************************************ 00:04:17.142 START TEST env_mem_callbacks 00:04:17.142 ************************************ 00:04:17.142 15:50:10 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.142 EAL: Detected CPU lcores: 10 00:04:17.142 EAL: Detected NUMA nodes: 1 00:04:17.142 EAL: Detected shared linkage of DPDK 00:04:17.142 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.142 EAL: Selected IOVA mode 'PA' 00:04:17.400 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.400 00:04:17.400 00:04:17.400 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.400 http://cunit.sourceforge.net/ 00:04:17.400 00:04:17.400 00:04:17.400 Suite: memory 00:04:17.400 Test: test ... 
00:04:17.400 register 0x200000200000 2097152 00:04:17.400 malloc 3145728 00:04:17.400 register 0x200000400000 4194304 00:04:17.400 buf 0x200000500000 len 3145728 PASSED 00:04:17.400 malloc 64 00:04:17.400 buf 0x2000004fff40 len 64 PASSED 00:04:17.400 malloc 4194304 00:04:17.400 register 0x200000800000 6291456 00:04:17.400 buf 0x200000a00000 len 4194304 PASSED 00:04:17.400 free 0x200000500000 3145728 00:04:17.400 free 0x2000004fff40 64 00:04:17.400 unregister 0x200000400000 4194304 PASSED 00:04:17.400 free 0x200000a00000 4194304 00:04:17.400 unregister 0x200000800000 6291456 PASSED 00:04:17.400 malloc 8388608 00:04:17.400 register 0x200000400000 10485760 00:04:17.400 buf 0x200000600000 len 8388608 PASSED 00:04:17.400 free 0x200000600000 8388608 00:04:17.400 unregister 0x200000400000 10485760 PASSED 00:04:17.400 passed 00:04:17.400 00:04:17.400 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.400 suites 1 1 n/a 0 0 00:04:17.400 tests 1 1 1 0 0 00:04:17.400 asserts 15 15 15 0 n/a 00:04:17.400 00:04:17.400 Elapsed time = 0.008 seconds 00:04:17.400 00:04:17.400 real 0m0.148s 00:04:17.400 user 0m0.019s 00:04:17.400 sys 0m0.027s 00:04:17.400 15:50:10 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.400 15:50:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:17.400 ************************************ 00:04:17.400 END TEST env_mem_callbacks 00:04:17.400 ************************************ 00:04:17.400 15:50:10 env -- common/autotest_common.sh@1142 -- # return 0 00:04:17.400 00:04:17.400 real 0m2.420s 00:04:17.400 user 0m1.214s 00:04:17.400 sys 0m0.844s 00:04:17.400 15:50:10 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.400 ************************************ 00:04:17.400 15:50:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.400 END TEST env 00:04:17.400 ************************************ 00:04:17.400 15:50:10 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.400 15:50:10 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:17.400 15:50:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.400 15:50:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.400 15:50:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.400 ************************************ 00:04:17.400 START TEST rpc 00:04:17.400 ************************************ 00:04:17.400 15:50:10 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:17.400 * Looking for test storage... 00:04:17.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.400 15:50:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:17.400 15:50:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60657 00:04:17.400 15:50:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.400 15:50:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60657 00:04:17.400 15:50:11 rpc -- common/autotest_common.sh@829 -- # '[' -z 60657 ']' 00:04:17.400 15:50:11 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.400 15:50:11 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.400 15:50:11 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
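The rpc suite above launches the target with '-e bdev' (the tracepoint group mask, matching the 'Tracepoint Group Mask bdev specified' notice further down) and then sits in waitforlisten until the RPC socket answers; the trace shows the helper capping itself at max_retries=100. A simplified equivalent of that startup, using only paths visible in the log (the real waitforlisten in autotest_common.sh is more careful about timeouts and error handling):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # poll the default RPC socket until the target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done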
00:04:17.400 15:50:11 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.400 15:50:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.658 [2024-07-15 15:50:11.132817] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:04:17.658 [2024-07-15 15:50:11.132933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60657 ] 00:04:17.658 [2024-07-15 15:50:11.269291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.916 [2024-07-15 15:50:11.411418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:17.916 [2024-07-15 15:50:11.411508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60657' to capture a snapshot of events at runtime. 00:04:17.916 [2024-07-15 15:50:11.411523] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:17.916 [2024-07-15 15:50:11.411532] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:17.916 [2024-07-15 15:50:11.411540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60657 for offline analysis/debug. 00:04:17.916 [2024-07-15 15:50:11.411596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.508 15:50:12 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.508 15:50:12 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:18.508 15:50:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.508 15:50:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.508 15:50:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:18.508 15:50:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:18.508 15:50:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.508 15:50:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.508 15:50:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.508 ************************************ 00:04:18.508 START TEST rpc_integrity 00:04:18.508 ************************************ 00:04:18.508 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:18.508 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:18.508 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.508 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.508 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.508 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:18.508 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.767 15:50:12 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:18.767 { 00:04:18.767 "aliases": [ 00:04:18.767 "6ecb708e-abc8-4fc6-a421-9b88911c6ff3" 00:04:18.767 ], 00:04:18.767 "assigned_rate_limits": { 00:04:18.767 "r_mbytes_per_sec": 0, 00:04:18.767 "rw_ios_per_sec": 0, 00:04:18.767 "rw_mbytes_per_sec": 0, 00:04:18.767 "w_mbytes_per_sec": 0 00:04:18.767 }, 00:04:18.767 "block_size": 512, 00:04:18.767 "claimed": false, 00:04:18.767 "driver_specific": {}, 00:04:18.767 "memory_domains": [ 00:04:18.767 { 00:04:18.767 "dma_device_id": "system", 00:04:18.767 "dma_device_type": 1 00:04:18.767 }, 00:04:18.767 { 00:04:18.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.767 "dma_device_type": 2 00:04:18.767 } 00:04:18.767 ], 00:04:18.767 "name": "Malloc0", 00:04:18.767 "num_blocks": 16384, 00:04:18.767 "product_name": "Malloc disk", 00:04:18.767 "supported_io_types": { 00:04:18.767 "abort": true, 00:04:18.767 "compare": false, 00:04:18.767 "compare_and_write": false, 00:04:18.767 "copy": true, 00:04:18.767 "flush": true, 00:04:18.767 "get_zone_info": false, 00:04:18.767 "nvme_admin": false, 00:04:18.767 "nvme_io": false, 00:04:18.767 "nvme_io_md": false, 00:04:18.767 "nvme_iov_md": false, 00:04:18.767 "read": true, 00:04:18.767 "reset": true, 00:04:18.767 "seek_data": false, 00:04:18.767 "seek_hole": false, 00:04:18.767 "unmap": true, 00:04:18.767 "write": true, 00:04:18.767 "write_zeroes": true, 00:04:18.767 "zcopy": true, 00:04:18.767 "zone_append": false, 00:04:18.767 "zone_management": false 00:04:18.767 }, 00:04:18.767 "uuid": "6ecb708e-abc8-4fc6-a421-9b88911c6ff3", 00:04:18.767 "zoned": false 00:04:18.767 } 00:04:18.767 ]' 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.767 [2024-07-15 15:50:12.327916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:18.767 [2024-07-15 15:50:12.328034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:18.767 [2024-07-15 15:50:12.328071] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e56af0 00:04:18.767 [2024-07-15 15:50:12.328092] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:18.767 [2024-07-15 15:50:12.330021] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:18.767 [2024-07-15 15:50:12.330071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:18.767 Passthru0 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.767 
15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:18.767 { 00:04:18.767 "aliases": [ 00:04:18.767 "6ecb708e-abc8-4fc6-a421-9b88911c6ff3" 00:04:18.767 ], 00:04:18.767 "assigned_rate_limits": { 00:04:18.767 "r_mbytes_per_sec": 0, 00:04:18.767 "rw_ios_per_sec": 0, 00:04:18.767 "rw_mbytes_per_sec": 0, 00:04:18.767 "w_mbytes_per_sec": 0 00:04:18.767 }, 00:04:18.767 "block_size": 512, 00:04:18.767 "claim_type": "exclusive_write", 00:04:18.767 "claimed": true, 00:04:18.767 "driver_specific": {}, 00:04:18.767 "memory_domains": [ 00:04:18.767 { 00:04:18.767 "dma_device_id": "system", 00:04:18.767 "dma_device_type": 1 00:04:18.767 }, 00:04:18.767 { 00:04:18.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.767 "dma_device_type": 2 00:04:18.767 } 00:04:18.767 ], 00:04:18.767 "name": "Malloc0", 00:04:18.767 "num_blocks": 16384, 00:04:18.767 "product_name": "Malloc disk", 00:04:18.767 "supported_io_types": { 00:04:18.767 "abort": true, 00:04:18.767 "compare": false, 00:04:18.767 "compare_and_write": false, 00:04:18.767 "copy": true, 00:04:18.767 "flush": true, 00:04:18.767 "get_zone_info": false, 00:04:18.767 "nvme_admin": false, 00:04:18.767 "nvme_io": false, 00:04:18.767 "nvme_io_md": false, 00:04:18.767 "nvme_iov_md": false, 00:04:18.767 "read": true, 00:04:18.767 "reset": true, 00:04:18.767 "seek_data": false, 00:04:18.767 "seek_hole": false, 00:04:18.767 "unmap": true, 00:04:18.767 "write": true, 00:04:18.767 "write_zeroes": true, 00:04:18.767 "zcopy": true, 00:04:18.767 "zone_append": false, 00:04:18.767 "zone_management": false 00:04:18.767 }, 00:04:18.767 "uuid": "6ecb708e-abc8-4fc6-a421-9b88911c6ff3", 00:04:18.767 "zoned": false 00:04:18.767 }, 00:04:18.767 { 00:04:18.767 "aliases": [ 00:04:18.767 "32b01ba8-2dec-52f7-8927-cb83c8cc9516" 00:04:18.767 ], 00:04:18.767 "assigned_rate_limits": { 00:04:18.767 "r_mbytes_per_sec": 0, 00:04:18.767 "rw_ios_per_sec": 0, 00:04:18.767 "rw_mbytes_per_sec": 0, 00:04:18.767 "w_mbytes_per_sec": 0 00:04:18.767 }, 00:04:18.767 "block_size": 512, 00:04:18.767 "claimed": false, 00:04:18.767 "driver_specific": { 00:04:18.767 "passthru": { 00:04:18.767 "base_bdev_name": "Malloc0", 00:04:18.767 "name": "Passthru0" 00:04:18.767 } 00:04:18.767 }, 00:04:18.767 "memory_domains": [ 00:04:18.767 { 00:04:18.767 "dma_device_id": "system", 00:04:18.767 "dma_device_type": 1 00:04:18.767 }, 00:04:18.767 { 00:04:18.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.767 "dma_device_type": 2 00:04:18.767 } 00:04:18.767 ], 00:04:18.767 "name": "Passthru0", 00:04:18.767 "num_blocks": 16384, 00:04:18.767 "product_name": "passthru", 00:04:18.767 "supported_io_types": { 00:04:18.767 "abort": true, 00:04:18.767 "compare": false, 00:04:18.767 "compare_and_write": false, 00:04:18.767 "copy": true, 00:04:18.767 "flush": true, 00:04:18.767 "get_zone_info": false, 00:04:18.767 "nvme_admin": false, 00:04:18.767 "nvme_io": false, 00:04:18.767 "nvme_io_md": false, 00:04:18.767 "nvme_iov_md": false, 00:04:18.767 "read": true, 00:04:18.767 "reset": true, 00:04:18.767 "seek_data": false, 00:04:18.767 "seek_hole": false, 00:04:18.767 "unmap": true, 00:04:18.767 "write": true, 00:04:18.767 "write_zeroes": true, 00:04:18.767 
"zcopy": true, 00:04:18.767 "zone_append": false, 00:04:18.767 "zone_management": false 00:04:18.767 }, 00:04:18.767 "uuid": "32b01ba8-2dec-52f7-8927-cb83c8cc9516", 00:04:18.767 "zoned": false 00:04:18.767 } 00:04:18.767 ]' 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.767 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:18.767 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:19.026 15:50:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:19.026 00:04:19.026 real 0m0.345s 00:04:19.026 user 0m0.237s 00:04:19.026 sys 0m0.035s 00:04:19.026 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.026 15:50:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.026 ************************************ 00:04:19.026 END TEST rpc_integrity 00:04:19.026 ************************************ 00:04:19.026 15:50:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:19.026 15:50:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:19.026 15:50:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.026 15:50:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.026 15:50:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.026 ************************************ 00:04:19.026 START TEST rpc_plugins 00:04:19.026 ************************************ 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:19.026 15:50:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.026 15:50:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:19.026 15:50:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.026 15:50:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 
00:04:19.026 { 00:04:19.026 "aliases": [ 00:04:19.026 "30f125ce-94ba-4a42-bf40-c5e11e1450f6" 00:04:19.026 ], 00:04:19.026 "assigned_rate_limits": { 00:04:19.026 "r_mbytes_per_sec": 0, 00:04:19.026 "rw_ios_per_sec": 0, 00:04:19.026 "rw_mbytes_per_sec": 0, 00:04:19.026 "w_mbytes_per_sec": 0 00:04:19.026 }, 00:04:19.026 "block_size": 4096, 00:04:19.026 "claimed": false, 00:04:19.026 "driver_specific": {}, 00:04:19.026 "memory_domains": [ 00:04:19.026 { 00:04:19.026 "dma_device_id": "system", 00:04:19.026 "dma_device_type": 1 00:04:19.026 }, 00:04:19.026 { 00:04:19.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.026 "dma_device_type": 2 00:04:19.026 } 00:04:19.026 ], 00:04:19.026 "name": "Malloc1", 00:04:19.026 "num_blocks": 256, 00:04:19.026 "product_name": "Malloc disk", 00:04:19.026 "supported_io_types": { 00:04:19.026 "abort": true, 00:04:19.026 "compare": false, 00:04:19.026 "compare_and_write": false, 00:04:19.026 "copy": true, 00:04:19.026 "flush": true, 00:04:19.026 "get_zone_info": false, 00:04:19.026 "nvme_admin": false, 00:04:19.026 "nvme_io": false, 00:04:19.026 "nvme_io_md": false, 00:04:19.026 "nvme_iov_md": false, 00:04:19.026 "read": true, 00:04:19.026 "reset": true, 00:04:19.026 "seek_data": false, 00:04:19.026 "seek_hole": false, 00:04:19.026 "unmap": true, 00:04:19.026 "write": true, 00:04:19.026 "write_zeroes": true, 00:04:19.026 "zcopy": true, 00:04:19.026 "zone_append": false, 00:04:19.026 "zone_management": false 00:04:19.026 }, 00:04:19.026 "uuid": "30f125ce-94ba-4a42-bf40-c5e11e1450f6", 00:04:19.026 "zoned": false 00:04:19.026 } 00:04:19.026 ]' 00:04:19.026 15:50:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:19.026 15:50:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:19.026 15:50:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.026 15:50:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.026 15:50:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:19.026 15:50:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:19.026 15:50:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:19.026 00:04:19.026 real 0m0.171s 00:04:19.026 user 0m0.123s 00:04:19.026 sys 0m0.011s 00:04:19.026 ************************************ 00:04:19.026 END TEST rpc_plugins 00:04:19.026 ************************************ 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.026 15:50:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.285 15:50:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:19.285 15:50:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:19.285 15:50:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.285 15:50:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.285 15:50:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.285 ************************************ 00:04:19.285 START TEST 
rpc_trace_cmd_test 00:04:19.285 ************************************ 00:04:19.285 15:50:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:19.285 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:19.285 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:19.285 15:50:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.285 15:50:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:19.285 15:50:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.285 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:19.285 "bdev": { 00:04:19.285 "mask": "0x8", 00:04:19.286 "tpoint_mask": "0xffffffffffffffff" 00:04:19.286 }, 00:04:19.286 "bdev_nvme": { 00:04:19.286 "mask": "0x4000", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "blobfs": { 00:04:19.286 "mask": "0x80", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "dsa": { 00:04:19.286 "mask": "0x200", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "ftl": { 00:04:19.286 "mask": "0x40", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "iaa": { 00:04:19.286 "mask": "0x1000", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "iscsi_conn": { 00:04:19.286 "mask": "0x2", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "nvme_pcie": { 00:04:19.286 "mask": "0x800", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "nvme_tcp": { 00:04:19.286 "mask": "0x2000", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "nvmf_rdma": { 00:04:19.286 "mask": "0x10", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "nvmf_tcp": { 00:04:19.286 "mask": "0x20", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "scsi": { 00:04:19.286 "mask": "0x4", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "sock": { 00:04:19.286 "mask": "0x8000", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "thread": { 00:04:19.286 "mask": "0x400", 00:04:19.286 "tpoint_mask": "0x0" 00:04:19.286 }, 00:04:19.286 "tpoint_group_mask": "0x8", 00:04:19.286 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60657" 00:04:19.286 }' 00:04:19.286 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:19.286 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:19.286 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:19.286 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:19.286 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:19.286 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:19.286 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:19.286 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:19.286 15:50:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:19.544 15:50:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:19.544 00:04:19.544 real 0m0.254s 00:04:19.544 user 0m0.223s 00:04:19.544 sys 0m0.023s 00:04:19.544 15:50:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.544 15:50:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:19.544 ************************************ 00:04:19.544 END TEST 
rpc_trace_cmd_test 00:04:19.544 ************************************ 00:04:19.544 15:50:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:19.544 15:50:13 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:19.544 15:50:13 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:19.544 15:50:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.544 15:50:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.544 15:50:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.544 ************************************ 00:04:19.544 START TEST go_rpc 00:04:19.544 ************************************ 00:04:19.544 15:50:13 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.544 15:50:13 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.544 15:50:13 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.544 15:50:13 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["ed4dcc8c-94dd-408a-91c0-d6687d1b402d"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"ed4dcc8c-94dd-408a-91c0-d6687d1b402d","zoned":false}]' 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:19.544 15:50:13 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.544 15:50:13 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.544 15:50:13 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:19.544 15:50:13 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:19.802 15:50:13 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:19.802 00:04:19.802 real 0m0.214s 00:04:19.802 user 0m0.144s 00:04:19.802 sys 0m0.037s 00:04:19.802 15:50:13 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.802 15:50:13 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.802 ************************************ 00:04:19.802 END TEST 
go_rpc 00:04:19.802 ************************************ 00:04:19.802 15:50:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:19.802 15:50:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:19.802 15:50:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:19.802 15:50:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.802 15:50:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.802 15:50:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.802 ************************************ 00:04:19.802 START TEST rpc_daemon_integrity 00:04:19.802 ************************************ 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.802 { 00:04:19.802 "aliases": [ 00:04:19.802 "72cb0166-b4e2-4198-ac0e-c15318a15ab1" 00:04:19.802 ], 00:04:19.802 "assigned_rate_limits": { 00:04:19.802 "r_mbytes_per_sec": 0, 00:04:19.802 "rw_ios_per_sec": 0, 00:04:19.802 "rw_mbytes_per_sec": 0, 00:04:19.802 "w_mbytes_per_sec": 0 00:04:19.802 }, 00:04:19.802 "block_size": 512, 00:04:19.802 "claimed": false, 00:04:19.802 "driver_specific": {}, 00:04:19.802 "memory_domains": [ 00:04:19.802 { 00:04:19.802 "dma_device_id": "system", 00:04:19.802 "dma_device_type": 1 00:04:19.802 }, 00:04:19.802 { 00:04:19.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.802 "dma_device_type": 2 00:04:19.802 } 00:04:19.802 ], 00:04:19.802 "name": "Malloc3", 00:04:19.802 "num_blocks": 16384, 00:04:19.802 "product_name": "Malloc disk", 00:04:19.802 "supported_io_types": { 00:04:19.802 "abort": true, 00:04:19.802 "compare": false, 00:04:19.802 "compare_and_write": false, 00:04:19.802 "copy": true, 00:04:19.802 "flush": true, 00:04:19.802 "get_zone_info": false, 00:04:19.802 "nvme_admin": false, 00:04:19.802 "nvme_io": false, 00:04:19.802 "nvme_io_md": false, 00:04:19.802 "nvme_iov_md": false, 00:04:19.802 "read": true, 00:04:19.802 "reset": true, 00:04:19.802 "seek_data": 
false, 00:04:19.802 "seek_hole": false, 00:04:19.802 "unmap": true, 00:04:19.802 "write": true, 00:04:19.802 "write_zeroes": true, 00:04:19.802 "zcopy": true, 00:04:19.802 "zone_append": false, 00:04:19.802 "zone_management": false 00:04:19.802 }, 00:04:19.802 "uuid": "72cb0166-b4e2-4198-ac0e-c15318a15ab1", 00:04:19.802 "zoned": false 00:04:19.802 } 00:04:19.802 ]' 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.802 [2024-07-15 15:50:13.485989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:19.802 [2024-07-15 15:50:13.486052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.802 [2024-07-15 15:50:13.486076] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eb6b80 00:04:19.802 [2024-07-15 15:50:13.486089] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.802 [2024-07-15 15:50:13.487758] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.802 [2024-07-15 15:50:13.487801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.802 Passthru0 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.802 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.802 { 00:04:19.802 "aliases": [ 00:04:19.802 "72cb0166-b4e2-4198-ac0e-c15318a15ab1" 00:04:19.802 ], 00:04:19.802 "assigned_rate_limits": { 00:04:19.802 "r_mbytes_per_sec": 0, 00:04:19.802 "rw_ios_per_sec": 0, 00:04:19.802 "rw_mbytes_per_sec": 0, 00:04:19.802 "w_mbytes_per_sec": 0 00:04:19.802 }, 00:04:19.802 "block_size": 512, 00:04:19.802 "claim_type": "exclusive_write", 00:04:19.802 "claimed": true, 00:04:19.802 "driver_specific": {}, 00:04:19.802 "memory_domains": [ 00:04:19.802 { 00:04:19.802 "dma_device_id": "system", 00:04:19.802 "dma_device_type": 1 00:04:19.802 }, 00:04:19.802 { 00:04:19.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.802 "dma_device_type": 2 00:04:19.802 } 00:04:19.802 ], 00:04:19.802 "name": "Malloc3", 00:04:19.802 "num_blocks": 16384, 00:04:19.802 "product_name": "Malloc disk", 00:04:19.802 "supported_io_types": { 00:04:19.802 "abort": true, 00:04:19.802 "compare": false, 00:04:19.802 "compare_and_write": false, 00:04:19.802 "copy": true, 00:04:19.802 "flush": true, 00:04:19.802 "get_zone_info": false, 00:04:19.802 "nvme_admin": false, 00:04:19.802 "nvme_io": false, 00:04:19.802 "nvme_io_md": false, 00:04:19.802 "nvme_iov_md": false, 00:04:19.802 "read": true, 00:04:19.802 "reset": true, 00:04:19.802 "seek_data": false, 00:04:19.802 "seek_hole": false, 00:04:19.802 "unmap": true, 00:04:19.802 "write": true, 00:04:19.802 "write_zeroes": 
true, 00:04:19.802 "zcopy": true, 00:04:19.802 "zone_append": false, 00:04:19.802 "zone_management": false 00:04:19.802 }, 00:04:19.802 "uuid": "72cb0166-b4e2-4198-ac0e-c15318a15ab1", 00:04:19.802 "zoned": false 00:04:19.802 }, 00:04:19.802 { 00:04:19.802 "aliases": [ 00:04:19.802 "e8be4663-5ff5-540e-bf8b-ee2b3eb80894" 00:04:19.802 ], 00:04:19.802 "assigned_rate_limits": { 00:04:19.802 "r_mbytes_per_sec": 0, 00:04:19.802 "rw_ios_per_sec": 0, 00:04:19.802 "rw_mbytes_per_sec": 0, 00:04:19.802 "w_mbytes_per_sec": 0 00:04:19.802 }, 00:04:19.802 "block_size": 512, 00:04:19.802 "claimed": false, 00:04:19.802 "driver_specific": { 00:04:19.802 "passthru": { 00:04:19.802 "base_bdev_name": "Malloc3", 00:04:19.802 "name": "Passthru0" 00:04:19.802 } 00:04:19.802 }, 00:04:19.802 "memory_domains": [ 00:04:19.802 { 00:04:19.802 "dma_device_id": "system", 00:04:19.802 "dma_device_type": 1 00:04:19.802 }, 00:04:19.802 { 00:04:19.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.802 "dma_device_type": 2 00:04:19.802 } 00:04:19.802 ], 00:04:19.803 "name": "Passthru0", 00:04:19.803 "num_blocks": 16384, 00:04:19.803 "product_name": "passthru", 00:04:19.803 "supported_io_types": { 00:04:19.803 "abort": true, 00:04:19.803 "compare": false, 00:04:19.803 "compare_and_write": false, 00:04:19.803 "copy": true, 00:04:19.803 "flush": true, 00:04:19.803 "get_zone_info": false, 00:04:19.803 "nvme_admin": false, 00:04:19.803 "nvme_io": false, 00:04:19.803 "nvme_io_md": false, 00:04:19.803 "nvme_iov_md": false, 00:04:19.803 "read": true, 00:04:19.803 "reset": true, 00:04:19.803 "seek_data": false, 00:04:19.803 "seek_hole": false, 00:04:19.803 "unmap": true, 00:04:19.803 "write": true, 00:04:19.803 "write_zeroes": true, 00:04:19.803 "zcopy": true, 00:04:19.803 "zone_append": false, 00:04:19.803 "zone_management": false 00:04:19.803 }, 00:04:19.803 "uuid": "e8be4663-5ff5-540e-bf8b-ee2b3eb80894", 00:04:19.803 "zoned": false 00:04:19.803 } 00:04:19.803 ]' 00:04:19.803 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:20.061 
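[Editor's note] Both integrity passes traced above (rpc_integrity on Malloc0, rpc_daemon_integrity on Malloc3) drive the same RPC sequence that rpc.sh wraps in rpc_cmd. A sketch of that sequence against an already-running target, with the size arguments and the jq length checks taken from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    malloc=$($rpc bdev_malloc_create 8 512)              # 8 MiB malloc bdev, 512-byte blocks (16384 blocks, as in the JSON above)
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0  # claim the malloc bdev behind a passthru bdev
    $rpc bdev_get_bdevs | jq length                      # expect 2: the malloc bdev plus the passthru
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    $rpc bdev_get_bdevs | jq length                      # expect 0 once both are deleted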
00:04:20.061 real 0m0.308s 00:04:20.061 user 0m0.193s 00:04:20.061 sys 0m0.052s 00:04:20.061 ************************************ 00:04:20.061 END TEST rpc_daemon_integrity 00:04:20.061 ************************************ 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.061 15:50:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.061 15:50:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.061 15:50:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:20.061 15:50:13 rpc -- rpc/rpc.sh@84 -- # killprocess 60657 00:04:20.061 15:50:13 rpc -- common/autotest_common.sh@948 -- # '[' -z 60657 ']' 00:04:20.061 15:50:13 rpc -- common/autotest_common.sh@952 -- # kill -0 60657 00:04:20.061 15:50:13 rpc -- common/autotest_common.sh@953 -- # uname 00:04:20.061 15:50:13 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:20.061 15:50:13 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60657 00:04:20.061 15:50:13 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:20.061 killing process with pid 60657 00:04:20.061 15:50:13 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:20.061 15:50:13 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60657' 00:04:20.061 15:50:13 rpc -- common/autotest_common.sh@967 -- # kill 60657 00:04:20.061 15:50:13 rpc -- common/autotest_common.sh@972 -- # wait 60657 00:04:20.996 00:04:20.996 real 0m3.398s 00:04:20.996 user 0m4.392s 00:04:20.996 sys 0m0.795s 00:04:20.996 15:50:14 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.996 15:50:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.996 ************************************ 00:04:20.996 END TEST rpc 00:04:20.996 ************************************ 00:04:20.996 15:50:14 -- common/autotest_common.sh@1142 -- # return 0 00:04:20.997 15:50:14 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:20.997 15:50:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.997 15:50:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.997 15:50:14 -- common/autotest_common.sh@10 -- # set +x 00:04:20.997 ************************************ 00:04:20.997 START TEST skip_rpc 00:04:20.997 ************************************ 00:04:20.997 15:50:14 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:20.997 * Looking for test storage... 
00:04:20.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.997 15:50:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:20.997 15:50:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.997 15:50:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:20.997 15:50:14 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.997 15:50:14 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.997 15:50:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.997 ************************************ 00:04:20.997 START TEST skip_rpc 00:04:20.997 ************************************ 00:04:20.997 15:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:20.997 15:50:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60918 00:04:20.997 15:50:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:20.997 15:50:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.997 15:50:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:20.997 [2024-07-15 15:50:14.608334] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:04:20.997 [2024-07-15 15:50:14.608503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60918 ] 00:04:21.255 [2024-07-15 15:50:14.746336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.255 [2024-07-15 15:50:14.912298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.521 2024/07/15 15:50:19 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:26.521 15:50:19 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60918 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60918 ']' 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60918 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60918 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.521 killing process with pid 60918 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60918' 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60918 00:04:26.521 15:50:19 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60918 00:04:26.521 00:04:26.521 real 0m5.702s 00:04:26.521 user 0m5.179s 00:04:26.521 sys 0m0.423s 00:04:26.521 ************************************ 00:04:26.521 15:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.521 15:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.521 END TEST skip_rpc 00:04:26.521 ************************************ 00:04:26.779 15:50:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.779 15:50:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:26.779 15:50:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.779 15:50:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.779 15:50:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.779 ************************************ 00:04:26.779 START TEST skip_rpc_with_json 00:04:26.779 ************************************ 00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61011 00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61011 00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61011 ']' 00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
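[Editor's note] The skip_rpc case that finished just above is the inverse check: started with --no-rpc-server, the target never creates /var/tmp/spdk.sock, so any RPC has to fail with the "no such file or directory" error seen in the trace. A condensed sketch of that check, with the flags and the 5-second settle time taken from the log; the NOT/es bookkeeping from autotest_common.sh is simplified to a plain if:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC socket exists despite --no-rpc-server" >&2
        kill "$spdk_pid"; exit 1
    fi
    kill "$spdk_pid"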
00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.779 15:50:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.779 [2024-07-15 15:50:20.342241] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:04:26.779 [2024-07-15 15:50:20.342340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61011 ] 00:04:26.779 [2024-07-15 15:50:20.478710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.038 [2024-07-15 15:50:20.642951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.604 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.604 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:27.604 15:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:27.604 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.604 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.604 [2024-07-15 15:50:21.329041] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:27.863 2024/07/15 15:50:21 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:27.863 request: 00:04:27.863 { 00:04:27.863 "method": "nvmf_get_transports", 00:04:27.863 "params": { 00:04:27.863 "trtype": "tcp" 00:04:27.863 } 00:04:27.863 } 00:04:27.863 Got JSON-RPC error response 00:04:27.863 GoRPCClient: error on JSON-RPC call 00:04:27.863 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:27.863 15:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:27.863 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.863 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.863 [2024-07-15 15:50:21.341174] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.863 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.863 15:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:27.863 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.863 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.863 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.863 15:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:27.863 { 00:04:27.863 "subsystems": [ 00:04:27.863 { 00:04:27.863 "subsystem": "keyring", 00:04:27.863 "config": [] 00:04:27.863 }, 00:04:27.863 { 00:04:27.863 "subsystem": "iobuf", 00:04:27.863 "config": [ 00:04:27.863 { 00:04:27.863 "method": "iobuf_set_options", 00:04:27.863 "params": { 00:04:27.863 "large_bufsize": 135168, 00:04:27.863 "large_pool_count": 1024, 00:04:27.863 "small_bufsize": 8192, 00:04:27.863 "small_pool_count": 8192 00:04:27.863 } 00:04:27.863 } 
00:04:27.863 ] 00:04:27.863 }, 00:04:27.863 { 00:04:27.863 "subsystem": "sock", 00:04:27.863 "config": [ 00:04:27.863 { 00:04:27.863 "method": "sock_set_default_impl", 00:04:27.863 "params": { 00:04:27.863 "impl_name": "posix" 00:04:27.863 } 00:04:27.863 }, 00:04:27.863 { 00:04:27.863 "method": "sock_impl_set_options", 00:04:27.863 "params": { 00:04:27.863 "enable_ktls": false, 00:04:27.863 "enable_placement_id": 0, 00:04:27.863 "enable_quickack": false, 00:04:27.863 "enable_recv_pipe": true, 00:04:27.863 "enable_zerocopy_send_client": false, 00:04:27.863 "enable_zerocopy_send_server": true, 00:04:27.863 "impl_name": "ssl", 00:04:27.863 "recv_buf_size": 4096, 00:04:27.863 "send_buf_size": 4096, 00:04:27.863 "tls_version": 0, 00:04:27.863 "zerocopy_threshold": 0 00:04:27.863 } 00:04:27.863 }, 00:04:27.863 { 00:04:27.863 "method": "sock_impl_set_options", 00:04:27.863 "params": { 00:04:27.863 "enable_ktls": false, 00:04:27.863 "enable_placement_id": 0, 00:04:27.863 "enable_quickack": false, 00:04:27.863 "enable_recv_pipe": true, 00:04:27.863 "enable_zerocopy_send_client": false, 00:04:27.863 "enable_zerocopy_send_server": true, 00:04:27.863 "impl_name": "posix", 00:04:27.863 "recv_buf_size": 2097152, 00:04:27.863 "send_buf_size": 2097152, 00:04:27.863 "tls_version": 0, 00:04:27.863 "zerocopy_threshold": 0 00:04:27.863 } 00:04:27.863 } 00:04:27.863 ] 00:04:27.863 }, 00:04:27.863 { 00:04:27.863 "subsystem": "vmd", 00:04:27.863 "config": [] 00:04:27.863 }, 00:04:27.863 { 00:04:27.863 "subsystem": "accel", 00:04:27.863 "config": [ 00:04:27.863 { 00:04:27.863 "method": "accel_set_options", 00:04:27.863 "params": { 00:04:27.863 "buf_count": 2048, 00:04:27.863 "large_cache_size": 16, 00:04:27.863 "sequence_count": 2048, 00:04:27.863 "small_cache_size": 128, 00:04:27.863 "task_count": 2048 00:04:27.863 } 00:04:27.863 } 00:04:27.863 ] 00:04:27.863 }, 00:04:27.863 { 00:04:27.863 "subsystem": "bdev", 00:04:27.863 "config": [ 00:04:27.863 { 00:04:27.863 "method": "bdev_set_options", 00:04:27.863 "params": { 00:04:27.863 "bdev_auto_examine": true, 00:04:27.863 "bdev_io_cache_size": 256, 00:04:27.863 "bdev_io_pool_size": 65535, 00:04:27.863 "iobuf_large_cache_size": 16, 00:04:27.863 "iobuf_small_cache_size": 128 00:04:27.863 } 00:04:27.863 }, 00:04:27.863 { 00:04:27.863 "method": "bdev_raid_set_options", 00:04:27.863 "params": { 00:04:27.863 "process_window_size_kb": 1024 00:04:27.863 } 00:04:27.863 }, 00:04:27.863 { 00:04:27.863 "method": "bdev_iscsi_set_options", 00:04:27.863 "params": { 00:04:27.863 "timeout_sec": 30 00:04:27.863 } 00:04:27.863 }, 00:04:27.863 { 00:04:27.863 "method": "bdev_nvme_set_options", 00:04:27.863 "params": { 00:04:27.863 "action_on_timeout": "none", 00:04:27.863 "allow_accel_sequence": false, 00:04:27.863 "arbitration_burst": 0, 00:04:27.863 "bdev_retry_count": 3, 00:04:27.864 "ctrlr_loss_timeout_sec": 0, 00:04:27.864 "delay_cmd_submit": true, 00:04:27.864 "dhchap_dhgroups": [ 00:04:27.864 "null", 00:04:27.864 "ffdhe2048", 00:04:27.864 "ffdhe3072", 00:04:27.864 "ffdhe4096", 00:04:27.864 "ffdhe6144", 00:04:27.864 "ffdhe8192" 00:04:27.864 ], 00:04:27.864 "dhchap_digests": [ 00:04:27.864 "sha256", 00:04:27.864 "sha384", 00:04:27.864 "sha512" 00:04:27.864 ], 00:04:27.864 "disable_auto_failback": false, 00:04:27.864 "fast_io_fail_timeout_sec": 0, 00:04:27.864 "generate_uuids": false, 00:04:27.864 "high_priority_weight": 0, 00:04:27.864 "io_path_stat": false, 00:04:27.864 "io_queue_requests": 0, 00:04:27.864 "keep_alive_timeout_ms": 10000, 00:04:27.864 "low_priority_weight": 0, 
00:04:27.864 "medium_priority_weight": 0, 00:04:27.864 "nvme_adminq_poll_period_us": 10000, 00:04:27.864 "nvme_error_stat": false, 00:04:27.864 "nvme_ioq_poll_period_us": 0, 00:04:27.864 "rdma_cm_event_timeout_ms": 0, 00:04:27.864 "rdma_max_cq_size": 0, 00:04:27.864 "rdma_srq_size": 0, 00:04:27.864 "reconnect_delay_sec": 0, 00:04:27.864 "timeout_admin_us": 0, 00:04:27.864 "timeout_us": 0, 00:04:27.864 "transport_ack_timeout": 0, 00:04:27.864 "transport_retry_count": 4, 00:04:27.864 "transport_tos": 0 00:04:27.864 } 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "method": "bdev_nvme_set_hotplug", 00:04:27.864 "params": { 00:04:27.864 "enable": false, 00:04:27.864 "period_us": 100000 00:04:27.864 } 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "method": "bdev_wait_for_examine" 00:04:27.864 } 00:04:27.864 ] 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "subsystem": "scsi", 00:04:27.864 "config": null 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "subsystem": "scheduler", 00:04:27.864 "config": [ 00:04:27.864 { 00:04:27.864 "method": "framework_set_scheduler", 00:04:27.864 "params": { 00:04:27.864 "name": "static" 00:04:27.864 } 00:04:27.864 } 00:04:27.864 ] 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "subsystem": "vhost_scsi", 00:04:27.864 "config": [] 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "subsystem": "vhost_blk", 00:04:27.864 "config": [] 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "subsystem": "ublk", 00:04:27.864 "config": [] 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "subsystem": "nbd", 00:04:27.864 "config": [] 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "subsystem": "nvmf", 00:04:27.864 "config": [ 00:04:27.864 { 00:04:27.864 "method": "nvmf_set_config", 00:04:27.864 "params": { 00:04:27.864 "admin_cmd_passthru": { 00:04:27.864 "identify_ctrlr": false 00:04:27.864 }, 00:04:27.864 "discovery_filter": "match_any" 00:04:27.864 } 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "method": "nvmf_set_max_subsystems", 00:04:27.864 "params": { 00:04:27.864 "max_subsystems": 1024 00:04:27.864 } 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "method": "nvmf_set_crdt", 00:04:27.864 "params": { 00:04:27.864 "crdt1": 0, 00:04:27.864 "crdt2": 0, 00:04:27.864 "crdt3": 0 00:04:27.864 } 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "method": "nvmf_create_transport", 00:04:27.864 "params": { 00:04:27.864 "abort_timeout_sec": 1, 00:04:27.864 "ack_timeout": 0, 00:04:27.864 "buf_cache_size": 4294967295, 00:04:27.864 "c2h_success": true, 00:04:27.864 "data_wr_pool_size": 0, 00:04:27.864 "dif_insert_or_strip": false, 00:04:27.864 "in_capsule_data_size": 4096, 00:04:27.864 "io_unit_size": 131072, 00:04:27.864 "max_aq_depth": 128, 00:04:27.864 "max_io_qpairs_per_ctrlr": 127, 00:04:27.864 "max_io_size": 131072, 00:04:27.864 "max_queue_depth": 128, 00:04:27.864 "num_shared_buffers": 511, 00:04:27.864 "sock_priority": 0, 00:04:27.864 "trtype": "TCP", 00:04:27.864 "zcopy": false 00:04:27.864 } 00:04:27.864 } 00:04:27.864 ] 00:04:27.864 }, 00:04:27.864 { 00:04:27.864 "subsystem": "iscsi", 00:04:27.864 "config": [ 00:04:27.864 { 00:04:27.864 "method": "iscsi_set_options", 00:04:27.864 "params": { 00:04:27.864 "allow_duplicated_isid": false, 00:04:27.864 "chap_group": 0, 00:04:27.864 "data_out_pool_size": 2048, 00:04:27.864 "default_time2retain": 20, 00:04:27.864 "default_time2wait": 2, 00:04:27.864 "disable_chap": false, 00:04:27.864 "error_recovery_level": 0, 00:04:27.864 "first_burst_length": 8192, 00:04:27.864 "immediate_data": true, 00:04:27.864 "immediate_data_pool_size": 16384, 00:04:27.864 "max_connections_per_session": 
2, 00:04:27.864 "max_large_datain_per_connection": 64, 00:04:27.864 "max_queue_depth": 64, 00:04:27.864 "max_r2t_per_connection": 4, 00:04:27.864 "max_sessions": 128, 00:04:27.864 "mutual_chap": false, 00:04:27.864 "node_base": "iqn.2016-06.io.spdk", 00:04:27.864 "nop_in_interval": 30, 00:04:27.864 "nop_timeout": 60, 00:04:27.864 "pdu_pool_size": 36864, 00:04:27.864 "require_chap": false 00:04:27.864 } 00:04:27.864 } 00:04:27.864 ] 00:04:27.864 } 00:04:27.864 ] 00:04:27.864 } 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61011 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61011 ']' 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61011 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61011 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:27.864 killing process with pid 61011 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61011' 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61011 00:04:27.864 15:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61011 00:04:28.800 15:50:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61050 00:04:28.800 15:50:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.800 15:50:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:34.057 15:50:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61050 00:04:34.057 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61050 ']' 00:04:34.057 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61050 00:04:34.057 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:34.057 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:34.057 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61050 00:04:34.057 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:34.057 killing process with pid 61050 00:04:34.057 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:34.057 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61050' 00:04:34.057 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61050 00:04:34.057 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61050 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:34.320 00:04:34.320 real 0m7.607s 00:04:34.320 user 0m7.024s 00:04:34.320 sys 0m0.940s 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.320 ************************************ 00:04:34.320 END TEST skip_rpc_with_json 00:04:34.320 ************************************ 00:04:34.320 15:50:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.320 15:50:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:34.320 15:50:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.320 15:50:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.320 15:50:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.320 ************************************ 00:04:34.320 START TEST skip_rpc_with_delay 00:04:34.320 ************************************ 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:34.320 15:50:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.320 [2024-07-15 15:50:28.026668] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
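For reference, the error above is exactly what skip_rpc_with_delay asserts: spdk_tgt must refuse to combine --no-rpc-server with --wait-for-rpc, since there would be no RPC server to wait on. A minimal standalone sketch of that check, reusing only the binary path and flags shown in the log (it is not part of the test suite itself):

  # Path taken from the log; adjust for your build tree.
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  # --wait-for-rpc only makes sense when the RPC server runs, so startup must fail here.
  if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: spdk_tgt started while told to wait for RPC without an RPC server" >&2
      exit 1
  fi
  echo "spdk_tgt rejected --wait-for-rpc together with --no-rpc-server, as expected"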
00:04:34.320 [2024-07-15 15:50:28.027392] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:34.320 15:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:34.320 15:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:34.320 15:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:34.320 15:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:34.320 00:04:34.320 real 0m0.106s 00:04:34.320 user 0m0.067s 00:04:34.320 sys 0m0.036s 00:04:34.320 15:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.320 ************************************ 00:04:34.320 END TEST skip_rpc_with_delay 00:04:34.320 ************************************ 00:04:34.320 15:50:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:34.579 15:50:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.579 15:50:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:34.579 15:50:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:34.579 15:50:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:34.579 15:50:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.579 15:50:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.579 15:50:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.579 ************************************ 00:04:34.579 START TEST exit_on_failed_rpc_init 00:04:34.579 ************************************ 00:04:34.579 15:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:34.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.579 15:50:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61165 00:04:34.579 15:50:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61165 00:04:34.579 15:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61165 ']' 00:04:34.579 15:50:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.579 15:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.579 15:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.579 15:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.579 15:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.579 15:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.579 [2024-07-15 15:50:28.189852] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:04:34.579 [2024-07-15 15:50:28.190032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61165 ] 00:04:34.837 [2024-07-15 15:50:28.332313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.837 [2024-07-15 15:50:28.496260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.771 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.772 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:35.772 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.772 [2024-07-15 15:50:29.300171] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:04:35.772 [2024-07-15 15:50:29.300300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61195 ] 00:04:35.772 [2024-07-15 15:50:29.440895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.048 [2024-07-15 15:50:29.568422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.048 [2024-07-15 15:50:29.568541] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
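The "socket in use" error above is the point of exit_on_failed_rpc_init: a second spdk_tgt started without -r collides with the first instance's default /var/tmp/spdk.sock and fails RPC initialization. A rough standalone sketch of the same collision, using only the binary path and core masks visible in the log (the sleep and timeout are simplifications added here; the real test uses waitforlisten and the NOT helper):

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$SPDK_BIN" -m 0x1 &              # first instance claims /var/tmp/spdk.sock
  first_pid=$!
  sleep 2                           # crude wait for the first reactor to come up
  # The second instance cannot bind the same RPC socket and should exit with an error.
  if timeout 15 "$SPDK_BIN" -m 0x2; then
      echo "unexpected: second spdk_tgt initialized its RPC server" >&2
  fi
  kill -SIGINT "$first_pid"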
00:04:36.048 [2024-07-15 15:50:29.568557] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:36.048 [2024-07-15 15:50:29.568566] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61165 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61165 ']' 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61165 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61165 00:04:36.048 killing process with pid 61165 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61165' 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61165 00:04:36.048 15:50:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61165 00:04:36.646 00:04:36.646 real 0m2.011s 00:04:36.646 user 0m2.267s 00:04:36.646 sys 0m0.613s 00:04:36.646 15:50:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.646 ************************************ 00:04:36.646 END TEST exit_on_failed_rpc_init 00:04:36.646 ************************************ 00:04:36.646 15:50:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.646 15:50:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:36.646 15:50:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.646 ************************************ 00:04:36.646 END TEST skip_rpc 00:04:36.646 ************************************ 00:04:36.646 00:04:36.646 real 0m15.731s 00:04:36.646 user 0m14.642s 00:04:36.646 sys 0m2.203s 00:04:36.646 15:50:30 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.646 15:50:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.646 15:50:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.646 15:50:30 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:36.646 15:50:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.646 
15:50:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.646 15:50:30 -- common/autotest_common.sh@10 -- # set +x 00:04:36.646 ************************************ 00:04:36.646 START TEST rpc_client 00:04:36.646 ************************************ 00:04:36.646 15:50:30 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:36.646 * Looking for test storage... 00:04:36.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:36.646 15:50:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:36.646 OK 00:04:36.646 15:50:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:36.646 ************************************ 00:04:36.646 END TEST rpc_client 00:04:36.646 ************************************ 00:04:36.646 00:04:36.646 real 0m0.108s 00:04:36.646 user 0m0.048s 00:04:36.646 sys 0m0.064s 00:04:36.646 15:50:30 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.646 15:50:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:36.646 15:50:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.646 15:50:30 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:36.646 15:50:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.646 15:50:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.646 15:50:30 -- common/autotest_common.sh@10 -- # set +x 00:04:36.646 ************************************ 00:04:36.646 START TEST json_config 00:04:36.646 ************************************ 00:04:36.646 15:50:30 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:36.905 15:50:30 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.905 15:50:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:36.905 15:50:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.905 15:50:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.905 15:50:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.905 15:50:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.905 15:50:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.905 15:50:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.905 15:50:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.905 15:50:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.905 15:50:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.905 15:50:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.906 15:50:30 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.906 15:50:30 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.906 15:50:30 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.906 15:50:30 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.906 15:50:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.906 15:50:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.906 15:50:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.906 15:50:30 json_config -- paths/export.sh@5 -- # export PATH 00:04:36.906 15:50:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@47 -- # : 0 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:36.906 15:50:30 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:36.906 INFO: JSON configuration test init 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:36.906 15:50:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.906 15:50:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:36.906 15:50:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.906 15:50:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.906 Waiting for target to run... 00:04:36.906 15:50:30 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:36.906 15:50:30 json_config -- json_config/common.sh@9 -- # local app=target 00:04:36.906 15:50:30 json_config -- json_config/common.sh@10 -- # shift 00:04:36.906 15:50:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.906 15:50:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.906 15:50:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.906 15:50:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.906 15:50:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.906 15:50:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61319 00:04:36.906 15:50:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:36.906 15:50:30 json_config -- json_config/common.sh@25 -- # waitforlisten 61319 /var/tmp/spdk_tgt.sock 00:04:36.906 15:50:30 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:36.906 15:50:30 json_config -- common/autotest_common.sh@829 -- # '[' -z 61319 ']' 00:04:36.906 15:50:30 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.906 15:50:30 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.906 15:50:30 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.906 15:50:30 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.906 15:50:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.906 [2024-07-15 15:50:30.532606] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:04:36.906 [2024-07-15 15:50:30.533479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61319 ] 00:04:37.472 [2024-07-15 15:50:30.956619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.472 [2024-07-15 15:50:31.089067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.037 15:50:31 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.037 15:50:31 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:38.037 15:50:31 json_config -- json_config/common.sh@26 -- # echo '' 00:04:38.037 00:04:38.037 15:50:31 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:38.037 15:50:31 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:38.037 15:50:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:38.037 15:50:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.037 15:50:31 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:38.037 15:50:31 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:38.037 15:50:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:38.037 15:50:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.037 15:50:31 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:38.037 15:50:31 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:38.037 15:50:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:38.600 15:50:32 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:38.600 15:50:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:38.600 15:50:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:38.600 15:50:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.600 15:50:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:38.600 15:50:32 json_config -- json_config/json_config.sh@46 
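At this point the target is up and waitforlisten has returned. A hypothetical polling loop in the same spirit, probing the test's RPC socket with rpc.py (rpc_get_methods is used here only as a cheap liveness check; the real helper in autotest_common.sh is more thorough):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock
  # Poll until the target answers on its UNIX domain socket, giving up after ~10s.
  for _ in $(seq 1 100); do
      if "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
          echo "target is listening on $SOCK"
          break
      fi
      sleep 0.1
  done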
-- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:38.600 15:50:32 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:38.600 15:50:32 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:38.600 15:50:32 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:38.600 15:50:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:38.858 15:50:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:38.858 15:50:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:38.858 15:50:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:38.858 15:50:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:38.858 15:50:32 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:38.858 15:50:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:39.116 MallocForNvmf0 00:04:39.116 15:50:32 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:39.116 15:50:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:39.375 MallocForNvmf1 00:04:39.375 15:50:32 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:39.375 15:50:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:39.633 [2024-07-15 15:50:33.243133] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:39.633 15:50:33 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:39.633 15:50:33 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:39.889 15:50:33 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:39.889 15:50:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:40.146 15:50:33 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:40.146 15:50:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:40.405 15:50:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:40.405 15:50:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:40.663 [2024-07-15 15:50:34.363685] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:40.663 15:50:34 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:40.663 15:50:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.663 15:50:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.922 15:50:34 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:40.922 15:50:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.922 15:50:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.922 15:50:34 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:40.922 15:50:34 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:40.922 15:50:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:41.182 MallocBdevForConfigChangeCheck 00:04:41.182 15:50:34 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:41.182 15:50:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.182 15:50:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.182 15:50:34 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:41.182 15:50:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.441 INFO: shutting down applications... 00:04:41.441 15:50:35 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
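Taken together, the json_config_setup_target phase above builds the whole NVMe-oF configuration over RPC before saving it. The same sequence collected into one sketch against the test's socket; every method name and argument is copied from the tgt_rpc calls in the log:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # Two malloc bdevs to expose as namespaces.
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, then a subsystem carrying both namespaces and a listener on 127.0.0.1:4420.
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420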
00:04:41.441 15:50:35 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:41.441 15:50:35 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:41.441 15:50:35 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:41.441 15:50:35 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:42.007 Calling clear_iscsi_subsystem 00:04:42.007 Calling clear_nvmf_subsystem 00:04:42.007 Calling clear_nbd_subsystem 00:04:42.007 Calling clear_ublk_subsystem 00:04:42.007 Calling clear_vhost_blk_subsystem 00:04:42.007 Calling clear_vhost_scsi_subsystem 00:04:42.007 Calling clear_bdev_subsystem 00:04:42.007 15:50:35 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:42.007 15:50:35 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:42.007 15:50:35 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:42.007 15:50:35 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.007 15:50:35 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:42.007 15:50:35 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:42.265 15:50:35 json_config -- json_config/json_config.sh@345 -- # break 00:04:42.265 15:50:35 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:42.265 15:50:35 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:42.265 15:50:35 json_config -- json_config/common.sh@31 -- # local app=target 00:04:42.265 15:50:35 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.265 15:50:35 json_config -- json_config/common.sh@35 -- # [[ -n 61319 ]] 00:04:42.265 15:50:35 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61319 00:04:42.265 15:50:35 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.265 15:50:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.265 15:50:35 json_config -- json_config/common.sh@41 -- # kill -0 61319 00:04:42.265 15:50:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.833 15:50:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.833 15:50:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.833 15:50:36 json_config -- json_config/common.sh@41 -- # kill -0 61319 00:04:42.833 15:50:36 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:42.833 15:50:36 json_config -- json_config/common.sh@43 -- # break 00:04:42.833 15:50:36 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:42.833 SPDK target shutdown done 00:04:42.833 15:50:36 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:42.833 INFO: relaunching applications... 00:04:42.833 15:50:36 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
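Before the relaunch proceeds, note how the shutdown just above is driven by json_config/common.sh: send SIGINT, then poll the pid up to 30 times with a 0.5 second sleep. A condensed sketch of that loop, with the pid value taken from the log purely for illustration:

  pid=61319                          # target pid printed earlier in the log
  kill -SIGINT "$pid"
  # Allow up to 30 * 0.5s = 15s for the reactor to drain and exit.
  for _ in $(seq 1 30); do
      if ! kill -0 "$pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done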
00:04:42.833 15:50:36 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.833 15:50:36 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.833 15:50:36 json_config -- json_config/common.sh@10 -- # shift 00:04:42.833 15:50:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.833 15:50:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.833 15:50:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.833 15:50:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.833 15:50:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.833 15:50:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61593 00:04:42.833 Waiting for target to run... 00:04:42.833 15:50:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.833 15:50:36 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.833 15:50:36 json_config -- json_config/common.sh@25 -- # waitforlisten 61593 /var/tmp/spdk_tgt.sock 00:04:42.833 15:50:36 json_config -- common/autotest_common.sh@829 -- # '[' -z 61593 ']' 00:04:42.833 15:50:36 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.833 15:50:36 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.833 15:50:36 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.833 15:50:36 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.833 15:50:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.833 [2024-07-15 15:50:36.494416] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:04:42.833 [2024-07-15 15:50:36.494553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:04:43.399 [2024-07-15 15:50:36.920855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.399 [2024-07-15 15:50:37.017847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.657 [2024-07-15 15:50:37.347763] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.657 [2024-07-15 15:50:37.379852] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:43.915 15:50:37 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.915 00:04:43.915 15:50:37 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:43.915 15:50:37 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.915 15:50:37 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:43.915 INFO: Checking if target configuration is the same... 00:04:43.915 15:50:37 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
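What follows is json_diff.sh comparing the relaunched target's live configuration against the spdk_tgt_config.json it was started from. The comparison roughly amounts to this sketch; it assumes config_filter.py reads JSON on stdin, and the temporary file names are placeholders rather than the mktemp names in the trace below:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  CONFIG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
  # Sort both documents so key ordering cannot produce spurious differences.
  $RPC save_config | "$FILTER" -method sort > /tmp/live_sorted.json
  "$FILTER" -method sort < "$CONFIG" > /tmp/file_sorted.json
  if diff -u /tmp/file_sorted.json /tmp/live_sorted.json; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi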
00:04:43.915 15:50:37 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:43.915 15:50:37 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:43.915 15:50:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.915 + '[' 2 -ne 2 ']' 00:04:43.915 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:43.915 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:43.915 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:43.915 +++ basename /dev/fd/62 00:04:43.915 ++ mktemp /tmp/62.XXX 00:04:43.915 + tmp_file_1=/tmp/62.H56 00:04:43.915 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:43.915 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:43.915 + tmp_file_2=/tmp/spdk_tgt_config.json.AOh 00:04:43.915 + ret=0 00:04:43.915 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:44.173 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:44.173 + diff -u /tmp/62.H56 /tmp/spdk_tgt_config.json.AOh 00:04:44.173 INFO: JSON config files are the same 00:04:44.173 + echo 'INFO: JSON config files are the same' 00:04:44.173 + rm /tmp/62.H56 /tmp/spdk_tgt_config.json.AOh 00:04:44.173 + exit 0 00:04:44.173 15:50:37 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:44.173 15:50:37 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:44.173 INFO: changing configuration and checking if this can be detected... 00:04:44.173 15:50:37 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.173 15:50:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.741 15:50:38 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:44.741 15:50:38 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:44.741 15:50:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.741 + '[' 2 -ne 2 ']' 00:04:44.741 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:44.741 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:44.741 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:44.741 +++ basename /dev/fd/62 00:04:44.741 ++ mktemp /tmp/62.XXX 00:04:44.741 + tmp_file_1=/tmp/62.txZ 00:04:44.741 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:44.741 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:44.741 + tmp_file_2=/tmp/spdk_tgt_config.json.uj5 00:04:44.741 + ret=0 00:04:44.741 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:44.999 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:44.999 + diff -u /tmp/62.txZ /tmp/spdk_tgt_config.json.uj5 00:04:44.999 + ret=1 00:04:44.999 + echo '=== Start of file: /tmp/62.txZ ===' 00:04:44.999 + cat /tmp/62.txZ 00:04:44.999 + echo '=== End of file: /tmp/62.txZ ===' 00:04:44.999 + echo '' 00:04:44.999 + echo '=== Start of file: /tmp/spdk_tgt_config.json.uj5 ===' 00:04:44.999 + cat /tmp/spdk_tgt_config.json.uj5 00:04:44.999 + echo '=== End of file: /tmp/spdk_tgt_config.json.uj5 ===' 00:04:44.999 + echo '' 00:04:44.999 + rm /tmp/62.txZ /tmp/spdk_tgt_config.json.uj5 00:04:44.999 + exit 1 00:04:45.000 INFO: configuration change detected. 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:45.000 15:50:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.000 15:50:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@317 -- # [[ -n 61593 ]] 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:45.000 15:50:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.000 15:50:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:45.000 15:50:38 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:45.000 15:50:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.000 15:50:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.258 15:50:38 json_config -- json_config/json_config.sh@323 -- # killprocess 61593 00:04:45.258 15:50:38 json_config -- common/autotest_common.sh@948 -- # '[' -z 61593 ']' 00:04:45.258 15:50:38 json_config -- common/autotest_common.sh@952 -- # kill -0 61593 00:04:45.258 15:50:38 json_config -- common/autotest_common.sh@953 -- # uname 00:04:45.258 15:50:38 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.258 15:50:38 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61593 00:04:45.258 
15:50:38 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.258 killing process with pid 61593 00:04:45.258 15:50:38 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.258 15:50:38 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61593' 00:04:45.258 15:50:38 json_config -- common/autotest_common.sh@967 -- # kill 61593 00:04:45.258 15:50:38 json_config -- common/autotest_common.sh@972 -- # wait 61593 00:04:45.516 15:50:39 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.516 15:50:39 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:45.516 15:50:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.516 15:50:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.516 15:50:39 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:45.516 INFO: Success 00:04:45.516 15:50:39 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:45.516 00:04:45.516 real 0m8.717s 00:04:45.516 user 0m12.507s 00:04:45.516 sys 0m1.965s 00:04:45.516 15:50:39 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.516 15:50:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.516 ************************************ 00:04:45.516 END TEST json_config 00:04:45.516 ************************************ 00:04:45.516 15:50:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:45.516 15:50:39 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:45.516 15:50:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.516 15:50:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.516 15:50:39 -- common/autotest_common.sh@10 -- # set +x 00:04:45.516 ************************************ 00:04:45.516 START TEST json_config_extra_key 00:04:45.516 ************************************ 00:04:45.516 15:50:39 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:45.516 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.516 15:50:39 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:45.516 15:50:39 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.517 15:50:39 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.517 15:50:39 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.517 15:50:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.517 15:50:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.517 15:50:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.517 15:50:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:45.517 15:50:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.517 15:50:39 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:45.517 15:50:39 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:45.517 15:50:39 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:45.517 15:50:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.517 15:50:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.517 15:50:39 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.517 15:50:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:45.517 15:50:39 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:45.517 15:50:39 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:45.517 INFO: launching applications... 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:45.517 15:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.517 15:50:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:45.517 15:50:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:45.517 15:50:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.517 15:50:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.517 15:50:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.517 15:50:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.517 15:50:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.517 15:50:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61769 00:04:45.517 Waiting for target to run... 00:04:45.517 15:50:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.517 15:50:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61769 /var/tmp/spdk_tgt.sock 00:04:45.517 15:50:39 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61769 ']' 00:04:45.517 15:50:39 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.517 15:50:39 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:45.517 15:50:39 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.517 15:50:39 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.517 15:50:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.517 15:50:39 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.775 [2024-07-15 15:50:39.261719] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:04:45.775 [2024-07-15 15:50:39.261828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61769 ] 00:04:46.045 [2024-07-15 15:50:39.677366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.314 [2024-07-15 15:50:39.794210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.573 15:50:40 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.573 00:04:46.573 15:50:40 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:46.573 15:50:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:46.573 INFO: shutting down applications... 00:04:46.573 15:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:46.573 15:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:46.573 15:50:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:46.573 15:50:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:46.573 15:50:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61769 ]] 00:04:46.573 15:50:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61769 00:04:46.573 15:50:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:46.573 15:50:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.573 15:50:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61769 00:04:46.573 15:50:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.139 15:50:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.139 15:50:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.139 15:50:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61769 00:04:47.139 15:50:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.139 15:50:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:47.139 15:50:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.139 SPDK target shutdown done 00:04:47.139 15:50:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.139 Success 00:04:47.139 15:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:47.139 00:04:47.139 real 0m1.611s 00:04:47.139 user 0m1.518s 00:04:47.139 sys 0m0.420s 00:04:47.139 15:50:40 json_config_extra_key -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:04:47.139 15:50:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.139 ************************************ 00:04:47.139 END TEST json_config_extra_key 00:04:47.139 ************************************ 00:04:47.139 15:50:40 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.139 15:50:40 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.139 15:50:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.139 15:50:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.139 15:50:40 -- common/autotest_common.sh@10 -- # set +x 00:04:47.139 ************************************ 00:04:47.139 START TEST alias_rpc 00:04:47.139 ************************************ 00:04:47.139 15:50:40 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.139 * Looking for test storage... 00:04:47.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:47.397 15:50:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:47.397 15:50:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61846 00:04:47.397 15:50:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61846 00:04:47.397 15:50:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.397 15:50:40 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61846 ']' 00:04:47.397 15:50:40 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.397 15:50:40 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.397 15:50:40 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.397 15:50:40 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.397 15:50:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.397 [2024-07-15 15:50:40.936580] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:04:47.397 [2024-07-15 15:50:40.936684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61846 ] 00:04:47.397 [2024-07-15 15:50:41.075649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.655 [2024-07-15 15:50:41.221150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.222 15:50:41 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.222 15:50:41 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:48.222 15:50:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:48.787 15:50:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61846 00:04:48.787 15:50:42 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61846 ']' 00:04:48.787 15:50:42 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61846 00:04:48.787 15:50:42 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:48.787 15:50:42 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.787 15:50:42 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61846 00:04:48.787 15:50:42 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:48.787 killing process with pid 61846 00:04:48.787 15:50:42 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:48.787 15:50:42 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61846' 00:04:48.787 15:50:42 alias_rpc -- common/autotest_common.sh@967 -- # kill 61846 00:04:48.787 15:50:42 alias_rpc -- common/autotest_common.sh@972 -- # wait 61846 00:04:49.045 00:04:49.045 real 0m1.909s 00:04:49.045 user 0m2.232s 00:04:49.045 sys 0m0.432s 00:04:49.045 15:50:42 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.045 15:50:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.045 ************************************ 00:04:49.045 END TEST alias_rpc 00:04:49.045 ************************************ 00:04:49.045 15:50:42 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.045 15:50:42 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:04:49.045 15:50:42 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.045 15:50:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.045 15:50:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.045 15:50:42 -- common/autotest_common.sh@10 -- # set +x 00:04:49.045 ************************************ 00:04:49.045 START TEST dpdk_mem_utility 00:04:49.045 ************************************ 00:04:49.045 15:50:42 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.303 * Looking for test storage... 
00:04:49.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:49.303 15:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:49.303 15:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61938 00:04:49.303 15:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61938 00:04:49.303 15:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.303 15:50:42 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61938 ']' 00:04:49.303 15:50:42 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.303 15:50:42 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.303 15:50:42 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.303 15:50:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.303 15:50:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.303 [2024-07-15 15:50:42.890505] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:04:49.303 [2024-07-15 15:50:42.890612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61938 ] 00:04:49.303 [2024-07-15 15:50:43.029575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.561 [2024-07-15 15:50:43.148866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.494 15:50:43 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.494 15:50:43 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:50.494 15:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:50.494 15:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:50.494 15:50:43 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.494 15:50:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.494 { 00:04:50.494 "filename": "/tmp/spdk_mem_dump.txt" 00:04:50.494 } 00:04:50.494 15:50:43 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.494 15:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:50.494 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:50.494 1 heaps totaling size 814.000000 MiB 00:04:50.494 size: 814.000000 MiB heap id: 0 00:04:50.495 end heaps---------- 00:04:50.495 8 mempools totaling size 598.116089 MiB 00:04:50.495 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:50.495 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:50.495 size: 84.521057 MiB name: bdev_io_61938 00:04:50.495 size: 51.011292 MiB name: evtpool_61938 00:04:50.495 size: 50.003479 MiB name: msgpool_61938 00:04:50.495 size: 21.763794 MiB name: PDU_Pool 00:04:50.495 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:04:50.495 size: 0.026123 MiB name: Session_Pool 00:04:50.495 end mempools------- 00:04:50.495 6 memzones totaling size 4.142822 MiB 00:04:50.495 size: 1.000366 MiB name: RG_ring_0_61938 00:04:50.495 size: 1.000366 MiB name: RG_ring_1_61938 00:04:50.495 size: 1.000366 MiB name: RG_ring_4_61938 00:04:50.495 size: 1.000366 MiB name: RG_ring_5_61938 00:04:50.495 size: 0.125366 MiB name: RG_ring_2_61938 00:04:50.495 size: 0.015991 MiB name: RG_ring_3_61938 00:04:50.495 end memzones------- 00:04:50.495 15:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:50.495 heap id: 0 total size: 814.000000 MiB number of busy elements: 218 number of free elements: 15 00:04:50.495 list of free elements. size: 12.486938 MiB 00:04:50.495 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:50.495 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:50.495 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:50.495 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:50.495 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:50.495 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:50.495 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:50.495 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:50.495 element at address: 0x200000200000 with size: 0.837036 MiB 00:04:50.495 element at address: 0x20001aa00000 with size: 0.572083 MiB 00:04:50.495 element at address: 0x20000b200000 with size: 0.489990 MiB 00:04:50.495 element at address: 0x200000800000 with size: 0.487061 MiB 00:04:50.495 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:50.495 element at address: 0x200027e00000 with size: 0.398315 MiB 00:04:50.495 element at address: 0x200003a00000 with size: 0.351685 MiB 00:04:50.495 list of standard malloc elements. 
size: 199.250488 MiB 00:04:50.495 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:50.495 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:50.495 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:50.495 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:50.495 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:50.495 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:50.495 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:50.495 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:50.495 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:50.495 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a080 with size: 0.000183 MiB 
00:04:50.495 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:50.495 element at 
address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa95380 
with size: 0.000183 MiB 00:04:50.495 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e66040 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:50.495 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6ef40 with size: 0.000183 MiB 
00:04:50.496 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:50.496 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:50.496 list of memzone associated elements. size: 602.262573 MiB 00:04:50.496 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:50.496 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:50.496 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:50.496 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:50.496 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:50.496 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61938_0 00:04:50.496 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:50.496 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61938_0 00:04:50.496 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:50.496 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61938_0 00:04:50.496 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:50.496 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:50.496 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:50.496 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:50.496 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:50.496 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61938 00:04:50.496 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:50.496 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61938 00:04:50.496 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:50.496 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61938 00:04:50.496 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:50.496 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:50.496 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:50.496 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:50.496 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:50.496 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:50.496 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:50.496 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:50.496 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:50.496 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61938 00:04:50.496 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:50.496 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61938 00:04:50.496 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:50.496 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61938 00:04:50.496 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:50.496 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61938 00:04:50.496 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:50.496 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61938 00:04:50.496 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:50.496 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:50.496 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:50.496 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:50.496 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:50.496 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:50.496 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:50.496 associated memzone info: size: 0.125366 MiB name: RG_ring_2_61938 00:04:50.496 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:50.496 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:50.496 element at address: 0x200027e66100 with size: 0.023743 MiB 00:04:50.496 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:50.496 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:50.496 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61938 00:04:50.496 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:04:50.496 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:50.496 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:50.496 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61938 00:04:50.496 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:50.496 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61938 00:04:50.496 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:04:50.496 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:50.496 15:50:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:50.496 15:50:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61938 00:04:50.496 15:50:44 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61938 ']' 00:04:50.496 15:50:44 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61938 00:04:50.496 15:50:44 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:50.496 15:50:44 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.496 15:50:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61938 00:04:50.496 15:50:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.496 killing process with pid 61938 00:04:50.496 15:50:44 
dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.496 15:50:44 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61938' 00:04:50.496 15:50:44 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61938 00:04:50.496 15:50:44 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61938 00:04:50.754 00:04:50.754 real 0m1.679s 00:04:50.754 user 0m1.815s 00:04:50.754 sys 0m0.430s 00:04:50.754 15:50:44 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.754 15:50:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.754 ************************************ 00:04:50.754 END TEST dpdk_mem_utility 00:04:50.754 ************************************ 00:04:50.754 15:50:44 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.754 15:50:44 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:50.754 15:50:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.754 15:50:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.754 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:04:50.754 ************************************ 00:04:50.754 START TEST event 00:04:50.754 ************************************ 00:04:50.754 15:50:44 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:51.013 * Looking for test storage... 00:04:51.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:51.013 15:50:44 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:51.013 15:50:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:51.013 15:50:44 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:51.013 15:50:44 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:51.013 15:50:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.013 15:50:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.013 ************************************ 00:04:51.013 START TEST event_perf 00:04:51.013 ************************************ 00:04:51.013 15:50:44 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:51.013 Running I/O for 1 seconds...[2024-07-15 15:50:44.585571] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:04:51.013 [2024-07-15 15:50:44.585664] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62027 ] 00:04:51.013 [2024-07-15 15:50:44.724110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.271 [2024-07-15 15:50:44.849784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.271 [2024-07-15 15:50:44.849937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.271 [2024-07-15 15:50:44.850030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.271 Running I/O for 1 seconds...[2024-07-15 15:50:44.850332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.241 00:04:52.241 lcore 0: 181073 00:04:52.241 lcore 1: 181074 00:04:52.241 lcore 2: 181075 00:04:52.241 lcore 3: 181077 00:04:52.241 done. 00:04:52.241 00:04:52.241 real 0m1.373s 00:04:52.241 user 0m4.190s 00:04:52.241 sys 0m0.061s 00:04:52.241 15:50:45 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.241 15:50:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.241 ************************************ 00:04:52.241 END TEST event_perf 00:04:52.241 ************************************ 00:04:52.499 15:50:45 event -- common/autotest_common.sh@1142 -- # return 0 00:04:52.499 15:50:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:52.499 15:50:45 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:52.499 15:50:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.499 15:50:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.499 ************************************ 00:04:52.499 START TEST event_reactor 00:04:52.499 ************************************ 00:04:52.499 15:50:45 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:52.499 [2024-07-15 15:50:46.006081] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:04:52.499 [2024-07-15 15:50:46.006202] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62066 ] 00:04:52.499 [2024-07-15 15:50:46.149309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.756 [2024-07-15 15:50:46.268248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.685 test_start 00:04:53.685 oneshot 00:04:53.685 tick 100 00:04:53.685 tick 100 00:04:53.685 tick 250 00:04:53.685 tick 100 00:04:53.685 tick 100 00:04:53.685 tick 100 00:04:53.685 tick 250 00:04:53.685 tick 500 00:04:53.685 tick 100 00:04:53.685 tick 100 00:04:53.685 tick 250 00:04:53.685 tick 100 00:04:53.685 tick 100 00:04:53.685 test_end 00:04:53.685 00:04:53.685 real 0m1.369s 00:04:53.685 user 0m1.209s 00:04:53.685 sys 0m0.054s 00:04:53.685 15:50:47 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.685 15:50:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:53.685 ************************************ 00:04:53.685 END TEST event_reactor 00:04:53.685 ************************************ 00:04:53.685 15:50:47 event -- common/autotest_common.sh@1142 -- # return 0 00:04:53.685 15:50:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.685 15:50:47 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:53.685 15:50:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.685 15:50:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.685 ************************************ 00:04:53.685 START TEST event_reactor_perf 00:04:53.685 ************************************ 00:04:53.685 15:50:47 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.941 [2024-07-15 15:50:47.421586] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:04:53.941 [2024-07-15 15:50:47.421687] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62096 ] 00:04:53.941 [2024-07-15 15:50:47.552894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.941 [2024-07-15 15:50:47.670576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.312 test_start 00:04:55.312 test_end 00:04:55.312 Performance: 372930 events per second 00:04:55.312 00:04:55.312 real 0m1.352s 00:04:55.312 user 0m1.195s 00:04:55.312 sys 0m0.051s 00:04:55.312 15:50:48 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.312 15:50:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.312 ************************************ 00:04:55.312 END TEST event_reactor_perf 00:04:55.312 ************************************ 00:04:55.312 15:50:48 event -- common/autotest_common.sh@1142 -- # return 0 00:04:55.312 15:50:48 event -- event/event.sh@49 -- # uname -s 00:04:55.312 15:50:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:55.312 15:50:48 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:55.312 15:50:48 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.312 15:50:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.312 15:50:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.312 ************************************ 00:04:55.312 START TEST event_scheduler 00:04:55.312 ************************************ 00:04:55.312 15:50:48 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:55.312 * Looking for test storage... 00:04:55.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:55.312 15:50:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:55.312 15:50:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62163 00:04:55.312 15:50:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:55.312 15:50:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.312 15:50:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62163 00:04:55.312 15:50:48 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62163 ']' 00:04:55.312 15:50:48 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.312 15:50:48 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.312 15:50:48 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.312 15:50:48 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.312 15:50:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.312 [2024-07-15 15:50:48.955327] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:04:55.312 [2024-07-15 15:50:48.955487] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62163 ] 00:04:55.571 [2024-07-15 15:50:49.104522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.571 [2024-07-15 15:50:49.239765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.571 [2024-07-15 15:50:49.239913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.571 [2024-07-15 15:50:49.240047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.571 [2024-07-15 15:50:49.240049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.549 15:50:49 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.549 15:50:49 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:56.549 15:50:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:56.549 15:50:49 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.549 15:50:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.549 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.549 POWER: Cannot set governor of lcore 0 to userspace 00:04:56.549 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.549 POWER: Cannot set governor of lcore 0 to performance 00:04:56.549 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.549 POWER: Cannot set governor of lcore 0 to userspace 00:04:56.549 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.549 POWER: Cannot set governor of lcore 0 to userspace 00:04:56.549 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:56.549 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:56.549 POWER: Unable to set Power Management Environment for lcore 0 00:04:56.549 [2024-07-15 15:50:49.998060] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:56.549 [2024-07-15 15:50:49.998444] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:56.549 [2024-07-15 15:50:49.998455] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:56.549 [2024-07-15 15:50:49.998467] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:56.549 [2024-07-15 15:50:49.998474] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:56.549 [2024-07-15 15:50:49.998483] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:56.549 15:50:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.549 15:50:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:56.549 15:50:50 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.549 15:50:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.549 [2024-07-15 15:50:50.135149] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:56.549 15:50:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.549 15:50:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:56.549 15:50:50 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.549 15:50:50 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.549 15:50:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.549 ************************************ 00:04:56.549 START TEST scheduler_create_thread 00:04:56.549 ************************************ 00:04:56.549 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:56.549 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:56.549 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.549 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.549 2 00:04:56.549 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.550 3 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.550 4 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.550 5 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.550 6 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.550 7 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.550 8 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.550 9 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.550 10 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.550 15:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.453 15:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.453 15:50:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:58.453 15:50:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:58.453 15:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.453 15:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.389 15:50:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.389 00:04:59.389 real 0m2.615s 00:04:59.389 user 0m0.013s 00:04:59.389 sys 0m0.007s 00:04:59.389 15:50:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.389 15:50:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.389 ************************************ 00:04:59.389 END TEST scheduler_create_thread 00:04:59.390 ************************************ 00:04:59.390 15:50:52 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:59.390 15:50:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:59.390 15:50:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62163 00:04:59.390 15:50:52 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62163 ']' 00:04:59.390 15:50:52 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62163 00:04:59.390 15:50:52 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:59.390 15:50:52 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.390 15:50:52 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62163 00:04:59.390 killing process with pid 62163 00:04:59.390 15:50:52 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:59.390 15:50:52 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:59.390 15:50:52 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62163' 00:04:59.390 15:50:52 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62163 00:04:59.390 15:50:52 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62163 00:04:59.647 [2024-07-15 15:50:53.241123] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:59.905 00:04:59.905 real 0m4.787s 00:04:59.905 user 0m8.925s 00:04:59.905 sys 0m0.445s 00:04:59.905 15:50:53 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.905 ************************************ 00:04:59.905 END TEST event_scheduler 00:04:59.905 ************************************ 00:04:59.905 15:50:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.905 15:50:53 event -- common/autotest_common.sh@1142 -- # return 0 00:04:59.905 15:50:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:00.163 15:50:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:00.163 15:50:53 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.163 15:50:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.163 15:50:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.163 ************************************ 00:05:00.163 START TEST app_repeat 00:05:00.163 ************************************ 00:05:00.163 15:50:53 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:00.163 Process app_repeat pid: 62275 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62275 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62275' 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.163 spdk_app_start Round 0 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:00.163 15:50:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62275 /var/tmp/spdk-nbd.sock 00:05:00.163 15:50:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62275 ']' 00:05:00.163 15:50:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.163 15:50:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.163 15:50:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.163 15:50:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.163 15:50:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.163 [2024-07-15 15:50:53.677669] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:05:00.163 [2024-07-15 15:50:53.677766] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62275 ] 00:05:00.163 [2024-07-15 15:50:53.815520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.422 [2024-07-15 15:50:53.952202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.422 [2024-07-15 15:50:53.952218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.988 15:50:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.988 15:50:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:00.988 15:50:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.554 Malloc0 00:05:01.554 15:50:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.813 Malloc1 00:05:01.813 15:50:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.813 15:50:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.071 /dev/nbd0 00:05:02.071 15:50:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.071 15:50:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.071 15:50:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:02.071 15:50:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:02.071 15:50:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:02.071 15:50:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:02.071 15:50:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:02.071 15:50:55 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:05:02.071 15:50:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:02.071 15:50:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:02.071 15:50:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.071 1+0 records in 00:05:02.071 1+0 records out 00:05:02.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346761 s, 11.8 MB/s 00:05:02.072 15:50:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.072 15:50:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:02.072 15:50:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.072 15:50:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:02.072 15:50:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:02.072 15:50:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.072 15:50:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.072 15:50:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.330 /dev/nbd1 00:05:02.330 15:50:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.330 15:50:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.330 1+0 records in 00:05:02.330 1+0 records out 00:05:02.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427563 s, 9.6 MB/s 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:02.330 15:50:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:02.330 15:50:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.330 15:50:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.330 15:50:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.330 15:50:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.330 
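Each app_repeat round builds its test disks the same way: bdev_malloc_create allocates a 64 MiB malloc bdev with a 4096-byte block size over the app's private RPC socket, nbd_start_disk exposes it as /dev/nbdX, and waitfornbd then polls /proc/partitions and issues a single O_DIRECT read before the device is trusted. A condensed sketch of that sequence, assuming an SPDK app is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded:

    sock=/var/tmp/spdk-nbd.sock
    ./scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096        # prints the new bdev name, e.g. Malloc0
    ./scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0

    # wait for the kernel to publish the device, then probe it with one direct read
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                                                 # retry delay; not visible in this trace
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct && rm -f /tmp/nbdtest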
15:50:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.588 { 00:05:02.588 "bdev_name": "Malloc0", 00:05:02.588 "nbd_device": "/dev/nbd0" 00:05:02.588 }, 00:05:02.588 { 00:05:02.588 "bdev_name": "Malloc1", 00:05:02.588 "nbd_device": "/dev/nbd1" 00:05:02.588 } 00:05:02.588 ]' 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.588 { 00:05:02.588 "bdev_name": "Malloc0", 00:05:02.588 "nbd_device": "/dev/nbd0" 00:05:02.588 }, 00:05:02.588 { 00:05:02.588 "bdev_name": "Malloc1", 00:05:02.588 "nbd_device": "/dev/nbd1" 00:05:02.588 } 00:05:02.588 ]' 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.588 /dev/nbd1' 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.588 /dev/nbd1' 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.588 256+0 records in 00:05:02.588 256+0 records out 00:05:02.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00786824 s, 133 MB/s 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.588 256+0 records in 00:05:02.588 256+0 records out 00:05:02.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268212 s, 39.1 MB/s 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.588 256+0 records in 00:05:02.588 256+0 records out 00:05:02.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300265 s, 34.9 MB/s 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.588 15:50:56 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.588 15:50:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.847 15:50:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.105 15:50:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.364 15:50:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.364 15:50:56 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.364 15:50:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.364 15:50:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.364 15:50:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.364 15:50:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.623 15:50:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.623 15:50:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.623 15:50:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.623 15:50:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.623 15:50:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.623 15:50:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.623 15:50:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.623 15:50:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.623 15:50:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.623 15:50:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:03.882 15:50:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.140 [2024-07-15 15:50:57.649008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.140 [2024-07-15 15:50:57.766150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.140 [2024-07-15 15:50:57.766161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.140 [2024-07-15 15:50:57.819767] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.140 [2024-07-15 15:50:57.819858] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.421 15:51:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:07.421 spdk_app_start Round 1 00:05:07.421 15:51:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:07.421 15:51:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62275 /var/tmp/spdk-nbd.sock 00:05:07.421 15:51:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62275 ']' 00:05:07.421 15:51:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.421 15:51:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.421 15:51:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
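The app_repeat harness repeats the same cycle for rounds 0 through 2, which is why the malloc/NBD setup above reappears: announce the round, wait for the app's RPC socket to come back, run the data verification, then ask the application to end its current iteration with spdk_kill_instance SIGTERM and give it a few seconds to restart its reactors. Roughly, using only the names visible in this trace:

    rpc_sock=/var/tmp/spdk-nbd.sock
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_sock"    # polling helper from common/autotest_common.sh

        # ...create Malloc0/Malloc1, export them over NBD, write and verify data...

        ./scripts/rpc.py -s "$rpc_sock" spdk_kill_instance SIGTERM
        sleep 3                                    # app_repeat restarts spdk_app_start for the next round
    done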
00:05:07.421 15:51:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.421 15:51:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.421 15:51:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.421 15:51:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:07.421 15:51:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.421 Malloc0 00:05:07.421 15:51:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.699 Malloc1 00:05:07.699 15:51:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.699 15:51:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.965 /dev/nbd0 00:05:07.965 15:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.965 15:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.965 1+0 records in 00:05:07.965 1+0 records out 
00:05:07.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434789 s, 9.4 MB/s 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:07.965 15:51:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:07.965 15:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.965 15:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.965 15:51:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.223 /dev/nbd1 00:05:08.223 15:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.223 15:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.223 1+0 records in 00:05:08.223 1+0 records out 00:05:08.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455288 s, 9.0 MB/s 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:08.223 15:51:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:08.223 15:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.223 15:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.223 15:51:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.223 15:51:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.223 15:51:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.480 15:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.480 { 00:05:08.480 "bdev_name": "Malloc0", 00:05:08.480 "nbd_device": "/dev/nbd0" 00:05:08.480 }, 00:05:08.480 { 00:05:08.480 "bdev_name": "Malloc1", 00:05:08.480 "nbd_device": "/dev/nbd1" 00:05:08.480 } 
00:05:08.480 ]' 00:05:08.480 15:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.480 15:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.480 { 00:05:08.480 "bdev_name": "Malloc0", 00:05:08.480 "nbd_device": "/dev/nbd0" 00:05:08.480 }, 00:05:08.480 { 00:05:08.480 "bdev_name": "Malloc1", 00:05:08.480 "nbd_device": "/dev/nbd1" 00:05:08.480 } 00:05:08.480 ]' 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.738 /dev/nbd1' 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.738 /dev/nbd1' 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.738 256+0 records in 00:05:08.738 256+0 records out 00:05:08.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00728415 s, 144 MB/s 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.738 256+0 records in 00:05:08.738 256+0 records out 00:05:08.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254177 s, 41.3 MB/s 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.738 256+0 records in 00:05:08.738 256+0 records out 00:05:08.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302585 s, 34.7 MB/s 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.738 15:51:02 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.738 15:51:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.996 15:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.996 15:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.996 15:51:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.996 15:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.996 15:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.996 15:51:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.996 15:51:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.996 15:51:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.996 15:51:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.996 15:51:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.254 15:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.254 15:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.254 15:51:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.254 15:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.254 15:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.254 15:51:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.254 15:51:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.254 15:51:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.254 15:51:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.254 15:51:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.254 15:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.511 15:51:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.511 15:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.511 15:51:03 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:09.768 15:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.768 15:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.768 15:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.768 15:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:09.768 15:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.768 15:51:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.768 15:51:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.768 15:51:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.768 15:51:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.768 15:51:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.024 15:51:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.283 [2024-07-15 15:51:03.758929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.283 [2024-07-15 15:51:03.879279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.283 [2024-07-15 15:51:03.879289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.283 [2024-07-15 15:51:03.938562] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.283 [2024-07-15 15:51:03.938648] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.561 15:51:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.561 spdk_app_start Round 2 00:05:13.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.561 15:51:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:13.561 15:51:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62275 /var/tmp/spdk-nbd.sock 00:05:13.561 15:51:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62275 ']' 00:05:13.561 15:51:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.561 15:51:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.561 15:51:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
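The write/verify pass that runs inside every round is plain dd and cmp: a 1 MiB file of random data is written through each NBD device with O_DIRECT and then compared back against the source byte-for-byte. A condensed sketch of what nbd_dd_data_verify does here, using the same sizes as the trace:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256                # 1 MiB of random data

    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct     # write pass
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                                # verify pass; exits non-zero on mismatch
    done
    rm "$tmp"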
00:05:13.561 15:51:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.561 15:51:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.561 15:51:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.561 15:51:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:13.561 15:51:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.561 Malloc0 00:05:13.561 15:51:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.818 Malloc1 00:05:13.818 15:51:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.818 15:51:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.075 /dev/nbd0 00:05:14.075 15:51:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.075 15:51:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.075 1+0 records in 00:05:14.075 1+0 records out 
00:05:14.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275628 s, 14.9 MB/s 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:14.075 15:51:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:14.075 15:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.075 15:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.075 15:51:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.332 /dev/nbd1 00:05:14.332 15:51:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.332 15:51:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.332 15:51:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:14.332 15:51:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:14.332 15:51:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:14.332 15:51:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:14.332 15:51:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:14.332 15:51:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:14.332 15:51:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:14.332 15:51:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:14.332 15:51:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.332 1+0 records in 00:05:14.332 1+0 records out 00:05:14.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426 s, 9.6 MB/s 00:05:14.333 15:51:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.333 15:51:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:14.333 15:51:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.333 15:51:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:14.333 15:51:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:14.333 15:51:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.333 15:51:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.333 15:51:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.333 15:51:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.333 15:51:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.902 { 00:05:14.902 "bdev_name": "Malloc0", 00:05:14.902 "nbd_device": "/dev/nbd0" 00:05:14.902 }, 00:05:14.902 { 00:05:14.902 "bdev_name": "Malloc1", 00:05:14.902 "nbd_device": "/dev/nbd1" 00:05:14.902 } 00:05:14.902 
]' 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.902 { 00:05:14.902 "bdev_name": "Malloc0", 00:05:14.902 "nbd_device": "/dev/nbd0" 00:05:14.902 }, 00:05:14.902 { 00:05:14.902 "bdev_name": "Malloc1", 00:05:14.902 "nbd_device": "/dev/nbd1" 00:05:14.902 } 00:05:14.902 ]' 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.902 /dev/nbd1' 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.902 /dev/nbd1' 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.902 256+0 records in 00:05:14.902 256+0 records out 00:05:14.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00585323 s, 179 MB/s 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.902 256+0 records in 00:05:14.902 256+0 records out 00:05:14.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027643 s, 37.9 MB/s 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.902 256+0 records in 00:05:14.902 256+0 records out 00:05:14.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317857 s, 33.0 MB/s 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.902 15:51:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.159 15:51:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.159 15:51:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.160 15:51:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.160 15:51:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.160 15:51:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.160 15:51:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.160 15:51:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.160 15:51:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.160 15:51:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.160 15:51:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.418 15:51:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.418 15:51:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.418 15:51:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.418 15:51:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.418 15:51:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.418 15:51:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.418 15:51:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.418 15:51:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.418 15:51:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.418 15:51:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.418 15:51:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.676 15:51:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.676 15:51:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.676 15:51:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:15.934 15:51:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.934 15:51:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.934 15:51:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.934 15:51:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.934 15:51:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.934 15:51:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.934 15:51:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.934 15:51:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.934 15:51:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.934 15:51:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.226 15:51:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.497 [2024-07-15 15:51:10.088381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.755 [2024-07-15 15:51:10.241719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.755 [2024-07-15 15:51:10.241740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.755 [2024-07-15 15:51:10.319373] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.755 [2024-07-15 15:51:10.319449] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.293 15:51:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62275 /var/tmp/spdk-nbd.sock 00:05:19.293 15:51:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62275 ']' 00:05:19.293 15:51:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.293 15:51:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.293 15:51:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
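Teardown is symmetric and closes out the test: every /dev/nbdX is detached with nbd_stop_disk, waitfornbd_exit polls /proc/partitions until the name disappears, nbd_get_disks must come back as an empty JSON array, and finally the application itself is stopped through killprocess (check the pid, read its comm name, skip the sudo case, then kill and wait). A simplified sketch of that tail end, reconstructed from the trace around this point:

    sock=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1; do
        ./scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do            # wait for the kernel to drop the device
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1                              # retry delay; not visible in this trace
        done
    done

    # expect an empty device list once everything is stopped
    count=$(./scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]

    # stop the app_repeat process itself (simplified killprocess)
    if [ "$(ps --no-headers -o comm= "$repeat_pid")" != sudo ]; then
        echo "killing process with pid $repeat_pid"
        kill "$repeat_pid" && wait "$repeat_pid"
    fi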
00:05:19.293 15:51:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.293 15:51:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:19.552 15:51:13 event.app_repeat -- event/event.sh@39 -- # killprocess 62275 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62275 ']' 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62275 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62275 00:05:19.552 killing process with pid 62275 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62275' 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62275 00:05:19.552 15:51:13 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62275 00:05:19.810 spdk_app_start is called in Round 0. 00:05:19.810 Shutdown signal received, stop current app iteration 00:05:19.810 Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 reinitialization... 00:05:19.810 spdk_app_start is called in Round 1. 00:05:19.810 Shutdown signal received, stop current app iteration 00:05:19.810 Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 reinitialization... 00:05:19.810 spdk_app_start is called in Round 2. 00:05:19.810 Shutdown signal received, stop current app iteration 00:05:19.810 Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 reinitialization... 00:05:19.810 spdk_app_start is called in Round 3. 
00:05:19.810 Shutdown signal received, stop current app iteration 00:05:19.810 15:51:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:19.810 15:51:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:19.810 00:05:19.810 real 0m19.763s 00:05:19.810 user 0m44.134s 00:05:19.810 sys 0m3.244s 00:05:19.810 15:51:13 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.810 15:51:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.810 ************************************ 00:05:19.810 END TEST app_repeat 00:05:19.810 ************************************ 00:05:19.810 15:51:13 event -- common/autotest_common.sh@1142 -- # return 0 00:05:19.810 15:51:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:19.810 15:51:13 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:19.810 15:51:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.810 15:51:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.810 15:51:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.810 ************************************ 00:05:19.810 START TEST cpu_locks 00:05:19.810 ************************************ 00:05:19.810 15:51:13 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:20.068 * Looking for test storage... 00:05:20.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:20.068 15:51:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:20.068 15:51:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:20.068 15:51:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:20.068 15:51:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:20.068 15:51:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.068 15:51:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.068 15:51:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.068 ************************************ 00:05:20.068 START TEST default_locks 00:05:20.068 ************************************ 00:05:20.068 15:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:20.068 15:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62916 00:05:20.068 15:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.069 15:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62916 00:05:20.069 15:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62916 ']' 00:05:20.069 15:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.069 15:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.069 15:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
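The cpu_locks suite that starts here checks SPDK's per-core lock files. default_locks launches a bare spdk_tgt pinned to core 0 (-m 0x1), waits for its default RPC socket, and then asserts that lslocks reports a lock held by that pid whose path matches spdk_cpu_lock; a later waitforlisten on the same, already-killed pid is expected to fail. A sketch of the central check, with the lock-file path left unspecified because the trace only shows the grep pattern:

    ./build/bin/spdk_tgt -m 0x1 &          # assumption: started in the background from the repo root
    tgt_pid=$!
    waitforlisten "$tgt_pid"               # default socket /var/tmp/spdk.sock

    # the running target must be holding a CPU lock for core 0
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock

    killprocess "$tgt_pid"                 # stopping the target releases the lock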
00:05:20.069 15:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.069 15:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.069 [2024-07-15 15:51:13.633236] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:20.069 [2024-07-15 15:51:13.633349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62916 ] 00:05:20.069 [2024-07-15 15:51:13.771436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.326 [2024-07-15 15:51:13.938899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.259 15:51:14 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.259 15:51:14 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:21.259 15:51:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62916 00:05:21.259 15:51:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62916 00:05:21.259 15:51:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.517 15:51:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62916 00:05:21.517 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62916 ']' 00:05:21.517 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62916 00:05:21.517 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:21.517 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.517 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62916 00:05:21.517 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.517 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.517 killing process with pid 62916 00:05:21.518 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62916' 00:05:21.518 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62916 00:05:21.518 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62916 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62916 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62916 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62916 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62916 ']' 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.085 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62916) - No such process 00:05:22.085 ERROR: process (pid: 62916) is no longer running 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.085 00:05:22.085 real 0m2.123s 00:05:22.085 user 0m2.273s 00:05:22.085 sys 0m0.639s 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.085 15:51:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.085 ************************************ 00:05:22.085 END TEST default_locks 00:05:22.085 ************************************ 00:05:22.085 15:51:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:22.086 15:51:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:22.086 15:51:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.086 15:51:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.086 15:51:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.086 ************************************ 00:05:22.086 START TEST default_locks_via_rpc 00:05:22.086 ************************************ 00:05:22.086 15:51:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:22.086 15:51:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62980 00:05:22.086 15:51:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.086 15:51:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62980 00:05:22.086 15:51:15 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 62980 ']' 00:05:22.086 15:51:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.086 15:51:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.086 15:51:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.086 15:51:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.086 15:51:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.362 [2024-07-15 15:51:15.815606] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:22.362 [2024-07-15 15:51:15.815741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62980 ] 00:05:22.362 [2024-07-15 15:51:15.957547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.623 [2024-07-15 15:51:16.082502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62980 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62980 00:05:23.189 15:51:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.757 15:51:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62980 00:05:23.757 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62980 ']' 
00:05:23.757 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62980 00:05:23.757 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:23.757 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.757 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62980 00:05:23.757 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.757 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.757 killing process with pid 62980 00:05:23.757 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62980' 00:05:23.757 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62980 00:05:23.757 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62980 00:05:24.015 00:05:24.015 real 0m1.986s 00:05:24.015 user 0m2.151s 00:05:24.015 sys 0m0.609s 00:05:24.015 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.015 ************************************ 00:05:24.015 END TEST default_locks_via_rpc 00:05:24.015 ************************************ 00:05:24.015 15:51:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.274 15:51:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:24.274 15:51:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:24.274 15:51:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.274 15:51:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.274 15:51:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.274 ************************************ 00:05:24.274 START TEST non_locking_app_on_locked_coremask 00:05:24.274 ************************************ 00:05:24.274 15:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:24.274 15:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63049 00:05:24.274 15:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63049 /var/tmp/spdk.sock 00:05:24.274 15:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63049 ']' 00:05:24.274 15:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.274 15:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.274 15:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.274 15:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
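The default_locks and default_locks_via_rpc runs above both decide pass/fail by asking the kernel which file locks the target process holds, via lslocks piped through grep for the spdk_cpu_lock prefix. A condensed stand-alone version of that check follows; the pid is the one observed above and is illustrative only.

    # Sketch: does the spdk_tgt process hold a CPU-core lock file?
    pid=62980                                   # pid taken from the run above; illustrative only
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds a CPU core lock file"
    else
        echo "pid $pid holds no CPU core lock file"
    fi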
00:05:24.274 15:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.274 15:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.274 [2024-07-15 15:51:17.860105] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:24.274 [2024-07-15 15:51:17.860236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63049 ] 00:05:24.274 [2024-07-15 15:51:18.002040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.532 [2024-07-15 15:51:18.125671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.467 15:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.467 15:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:25.467 15:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63077 00:05:25.467 15:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:25.467 15:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63077 /var/tmp/spdk2.sock 00:05:25.467 15:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63077 ']' 00:05:25.467 15:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.467 15:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.467 15:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.467 15:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.467 15:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.467 [2024-07-15 15:51:18.973824] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:25.467 [2024-07-15 15:51:18.973913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63077 ] 00:05:25.467 [2024-07-15 15:51:19.119550] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:25.467 [2024-07-15 15:51:19.119619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.725 [2024-07-15 15:51:19.367984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.660 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.660 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:26.660 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63049 00:05:26.660 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63049 00:05:26.660 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.228 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63049 00:05:27.228 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63049 ']' 00:05:27.228 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63049 00:05:27.228 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:27.228 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.228 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63049 00:05:27.228 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.228 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.228 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63049' 00:05:27.228 killing process with pid 63049 00:05:27.228 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63049 00:05:27.228 15:51:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63049 00:05:28.165 15:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63077 00:05:28.165 15:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63077 ']' 00:05:28.165 15:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63077 00:05:28.165 15:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:28.165 15:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.165 15:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63077 00:05:28.165 15:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.165 15:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.165 killing process with pid 63077 00:05:28.165 15:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63077' 00:05:28.165 15:51:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63077 00:05:28.165 15:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63077 00:05:28.732 00:05:28.732 real 0m4.454s 00:05:28.732 user 0m5.019s 00:05:28.732 sys 0m1.219s 00:05:28.732 15:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.732 15:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.732 ************************************ 00:05:28.732 END TEST non_locking_app_on_locked_coremask 00:05:28.732 ************************************ 00:05:28.732 15:51:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:28.732 15:51:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:28.732 15:51:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.732 15:51:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.732 15:51:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.732 ************************************ 00:05:28.732 START TEST locking_app_on_unlocked_coremask 00:05:28.732 ************************************ 00:05:28.732 15:51:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:28.732 15:51:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63156 00:05:28.732 15:51:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63156 /var/tmp/spdk.sock 00:05:28.732 15:51:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63156 ']' 00:05:28.732 15:51:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:28.732 15:51:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.732 15:51:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.732 15:51:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.732 15:51:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.732 15:51:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.732 [2024-07-15 15:51:22.354686] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:28.732 [2024-07-15 15:51:22.354800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63156 ] 00:05:28.991 [2024-07-15 15:51:22.495096] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:28.991 [2024-07-15 15:51:22.495162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.991 [2024-07-15 15:51:22.613617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.925 15:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.925 15:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:29.925 15:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63184 00:05:29.925 15:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.925 15:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63184 /var/tmp/spdk2.sock 00:05:29.925 15:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63184 ']' 00:05:29.925 15:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.925 15:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.925 15:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.925 15:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.925 15:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.925 [2024-07-15 15:51:23.416298] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:05:29.925 [2024-07-15 15:51:23.416869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63184 ] 00:05:29.925 [2024-07-15 15:51:23.564399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.183 [2024-07-15 15:51:23.821893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.750 15:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.750 15:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:30.750 15:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63184 00:05:30.750 15:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63184 00:05:30.750 15:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.686 15:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63156 00:05:31.686 15:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63156 ']' 00:05:31.686 15:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63156 00:05:31.686 15:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:31.686 15:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.686 15:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63156 00:05:31.686 15:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.686 15:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.686 killing process with pid 63156 00:05:31.686 15:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63156' 00:05:31.686 15:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63156 00:05:31.686 15:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63156 00:05:32.621 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63184 00:05:32.621 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63184 ']' 00:05:32.621 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63184 00:05:32.621 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:32.621 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.621 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63184 00:05:32.621 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.621 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.621 killing process with pid 63184 00:05:32.621 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63184' 00:05:32.621 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63184 00:05:32.621 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63184 00:05:32.880 00:05:32.880 real 0m4.279s 00:05:32.880 user 0m4.732s 00:05:32.880 sys 0m1.193s 00:05:32.880 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.880 15:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.880 ************************************ 00:05:32.880 END TEST locking_app_on_unlocked_coremask 00:05:32.880 ************************************ 00:05:32.880 15:51:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:32.880 15:51:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:32.880 15:51:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.880 15:51:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.880 15:51:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.138 ************************************ 00:05:33.138 START TEST locking_app_on_locked_coremask 00:05:33.138 ************************************ 00:05:33.138 15:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:33.138 15:51:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63269 00:05:33.138 15:51:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63269 /var/tmp/spdk.sock 00:05:33.138 15:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63269 ']' 00:05:33.138 15:51:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.138 15:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.138 15:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.138 15:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.138 15:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.138 15:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.138 [2024-07-15 15:51:26.687441] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
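The two tests that finished above exercise both directions of lock sharing on core 0: a second target started with --disable-cpumask-locks can sit on a core that is already locked (non_locking_app_on_locked_coremask), and a normally started target can claim a core whose first occupant skipped the lock (locking_app_on_unlocked_coremask). A rough manual replay of the first direction is sketched below; the sleep replaces the harness's waitforlisten polling, which is not reproduced here.

    # Sketch: while a first target owns core 0, a second instance can still start
    # on the same core if it opts out of the lock and uses its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    tgt2_pid=$!
    sleep 2    # crude stand-in for waitforlisten on /var/tmp/spdk2.sock
    lslocks -p "$tgt2_pid" | grep -q spdk_cpu_lock || echo "second target holds no core lock, as intended"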
00:05:33.138 [2024-07-15 15:51:26.687543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63269 ] 00:05:33.138 [2024-07-15 15:51:26.824466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.396 [2024-07-15 15:51:26.939323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.343 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.343 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:34.343 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63297 00:05:34.343 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63297 /var/tmp/spdk2.sock 00:05:34.343 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.343 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:34.343 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63297 /var/tmp/spdk2.sock 00:05:34.343 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:34.343 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.344 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:34.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.344 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.344 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63297 /var/tmp/spdk2.sock 00:05:34.344 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63297 ']' 00:05:34.344 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.344 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.344 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.344 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.344 15:51:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.344 [2024-07-15 15:51:27.787497] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:05:34.344 [2024-07-15 15:51:27.787614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63297 ] 00:05:34.344 [2024-07-15 15:51:27.933296] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63269 has claimed it. 00:05:34.344 [2024-07-15 15:51:27.933402] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:34.910 ERROR: process (pid: 63297) is no longer running 00:05:34.910 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63297) - No such process 00:05:34.910 15:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.910 15:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:34.910 15:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:34.910 15:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.910 15:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.910 15:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.910 15:51:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63269 00:05:34.910 15:51:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63269 00:05:34.910 15:51:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.475 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63269 00:05:35.475 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63269 ']' 00:05:35.475 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63269 00:05:35.475 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:35.475 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.475 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63269 00:05:35.475 killing process with pid 63269 00:05:35.475 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.475 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.475 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63269' 00:05:35.475 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63269 00:05:35.475 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63269 00:05:35.732 ************************************ 00:05:35.732 END TEST locking_app_on_locked_coremask 00:05:35.732 ************************************ 00:05:35.732 00:05:35.732 real 0m2.838s 00:05:35.732 user 0m3.326s 00:05:35.732 sys 0m0.699s 00:05:35.732 15:51:29 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.732 15:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.989 15:51:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:35.989 15:51:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:35.989 15:51:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.989 15:51:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.989 15:51:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.989 ************************************ 00:05:35.989 START TEST locking_overlapped_coremask 00:05:35.989 ************************************ 00:05:35.989 15:51:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:35.989 15:51:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63348 00:05:35.989 15:51:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:35.989 15:51:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63348 /var/tmp/spdk.sock 00:05:35.989 15:51:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63348 ']' 00:05:35.989 15:51:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.989 15:51:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.989 15:51:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.989 15:51:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.989 15:51:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.989 [2024-07-15 15:51:29.576688] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
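locking_overlapped_coremask, which starts just above, runs its first target with -m 0x7 (cores 0-2) and later confirms that exactly the matching lock files remain after a second target fails to overlap it. The check_remaining_locks comparison seen further down reduces to a glob-versus-brace-expansion equality test; a condensed sketch, with the /var/tmp paths taken from the log:

    # Sketch: verify that only the lock files for cores 0-2 exist.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "exactly the lock files for cores 0-2 are present"
    fi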
00:05:35.989 [2024-07-15 15:51:29.576808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63348 ] 00:05:35.989 [2024-07-15 15:51:29.713163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.247 [2024-07-15 15:51:29.851748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.247 [2024-07-15 15:51:29.851900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.247 [2024-07-15 15:51:29.851908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63383 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63383 /var/tmp/spdk2.sock 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63383 /var/tmp/spdk2.sock 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63383 /var/tmp/spdk2.sock 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63383 ']' 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.176 15:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.176 [2024-07-15 15:51:30.669474] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:05:37.176 [2024-07-15 15:51:30.669588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63383 ] 00:05:37.176 [2024-07-15 15:51:30.812644] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63348 has claimed it. 00:05:37.176 [2024-07-15 15:51:30.812716] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:37.741 ERROR: process (pid: 63383) is no longer running 00:05:37.741 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63383) - No such process 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63348 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63348 ']' 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63348 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63348 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.741 killing process with pid 63348 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63348' 00:05:37.741 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63348 00:05:37.741 15:51:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63348 00:05:38.306 00:05:38.306 real 0m2.330s 00:05:38.306 user 0m6.409s 00:05:38.306 sys 0m0.487s 00:05:38.306 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.306 15:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.306 ************************************ 00:05:38.306 END TEST locking_overlapped_coremask 00:05:38.306 ************************************ 00:05:38.306 15:51:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:38.306 15:51:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:38.306 15:51:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.307 15:51:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.307 15:51:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.307 ************************************ 00:05:38.307 START TEST locking_overlapped_coremask_via_rpc 00:05:38.307 ************************************ 00:05:38.307 15:51:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:38.307 15:51:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63430 00:05:38.307 15:51:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63430 /var/tmp/spdk.sock 00:05:38.307 15:51:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63430 ']' 00:05:38.307 15:51:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.307 15:51:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:38.307 15:51:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.307 15:51:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.307 15:51:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.307 15:51:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.307 [2024-07-15 15:51:31.963330] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:38.307 [2024-07-15 15:51:31.963447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63430 ] 00:05:38.564 [2024-07-15 15:51:32.104326] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:38.564 [2024-07-15 15:51:32.104416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.564 [2024-07-15 15:51:32.248851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.564 [2024-07-15 15:51:32.248990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.564 [2024-07-15 15:51:32.248994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.525 15:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.525 15:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:39.525 15:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63460 00:05:39.525 15:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63460 /var/tmp/spdk2.sock 00:05:39.525 15:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:39.525 15:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63460 ']' 00:05:39.525 15:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.525 15:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.525 15:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.525 15:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.525 15:51:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.525 [2024-07-15 15:51:33.057380] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:39.525 [2024-07-15 15:51:33.057486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63460 ] 00:05:39.525 [2024-07-15 15:51:33.203594] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:39.525 [2024-07-15 15:51:33.203665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.804 [2024-07-15 15:51:33.450722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.804 [2024-07-15 15:51:33.450818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:39.804 [2024-07-15 15:51:33.450820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.366 [2024-07-15 15:51:34.081091] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63430 has claimed it. 
00:05:40.366 2024/07/15 15:51:34 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:40.366 request: 00:05:40.366 { 00:05:40.366 "method": "framework_enable_cpumask_locks", 00:05:40.366 "params": {} 00:05:40.366 } 00:05:40.366 Got JSON-RPC error response 00:05:40.366 GoRPCClient: error on JSON-RPC call 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63430 /var/tmp/spdk.sock 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63430 ']' 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.366 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.932 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.932 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:40.932 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63460 /var/tmp/spdk2.sock 00:05:40.932 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63460 ']' 00:05:40.932 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.932 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.932 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:40.932 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.932 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.191 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.191 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:41.191 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:41.191 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:41.191 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:41.191 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:41.191 00:05:41.191 real 0m2.810s 00:05:41.191 user 0m1.491s 00:05:41.191 sys 0m0.261s 00:05:41.191 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.191 15:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.191 ************************************ 00:05:41.191 END TEST locking_overlapped_coremask_via_rpc 00:05:41.191 ************************************ 00:05:41.191 15:51:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:41.191 15:51:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:41.191 15:51:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63430 ]] 00:05:41.191 15:51:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63430 00:05:41.191 15:51:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63430 ']' 00:05:41.191 15:51:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63430 00:05:41.191 15:51:34 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:41.191 15:51:34 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.191 15:51:34 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63430 00:05:41.191 15:51:34 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.191 15:51:34 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.191 killing process with pid 63430 00:05:41.191 15:51:34 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63430' 00:05:41.191 15:51:34 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63430 00:05:41.191 15:51:34 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63430 00:05:41.449 15:51:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63460 ]] 00:05:41.449 15:51:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63460 00:05:41.449 15:51:35 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63460 ']' 00:05:41.449 15:51:35 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63460 00:05:41.449 15:51:35 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:41.449 15:51:35 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.449 15:51:35 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63460 00:05:41.449 15:51:35 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:41.449 15:51:35 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:41.449 15:51:35 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63460' 00:05:41.449 killing process with pid 63460 00:05:41.449 15:51:35 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63460 00:05:41.449 15:51:35 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63460 00:05:42.014 15:51:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:42.015 15:51:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:42.015 15:51:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63430 ]] 00:05:42.015 15:51:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63430 00:05:42.015 15:51:35 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63430 ']' 00:05:42.015 15:51:35 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63430 00:05:42.015 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63430) - No such process 00:05:42.015 Process with pid 63430 is not found 00:05:42.015 15:51:35 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63430 is not found' 00:05:42.015 15:51:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63460 ]] 00:05:42.015 15:51:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63460 00:05:42.015 15:51:35 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63460 ']' 00:05:42.015 15:51:35 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63460 00:05:42.015 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63460) - No such process 00:05:42.015 Process with pid 63460 is not found 00:05:42.015 15:51:35 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63460 is not found' 00:05:42.015 15:51:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:42.015 ************************************ 00:05:42.015 END TEST cpu_locks 00:05:42.015 ************************************ 00:05:42.015 00:05:42.015 real 0m22.126s 00:05:42.015 user 0m38.273s 00:05:42.015 sys 0m5.988s 00:05:42.015 15:51:35 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.015 15:51:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.015 15:51:35 event -- common/autotest_common.sh@1142 -- # return 0 00:05:42.015 00:05:42.015 real 0m51.156s 00:05:42.015 user 1m38.071s 00:05:42.015 sys 0m10.064s 00:05:42.015 15:51:35 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.015 15:51:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.015 ************************************ 00:05:42.015 END TEST event 00:05:42.015 ************************************ 00:05:42.015 15:51:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.015 15:51:35 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:42.015 15:51:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.015 15:51:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.015 15:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:42.015 ************************************ 00:05:42.015 START TEST thread 
00:05:42.015 ************************************ 00:05:42.015 15:51:35 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:42.273 * Looking for test storage... 00:05:42.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:42.273 15:51:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:42.273 15:51:35 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:42.273 15:51:35 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.273 15:51:35 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.273 ************************************ 00:05:42.273 START TEST thread_poller_perf 00:05:42.273 ************************************ 00:05:42.273 15:51:35 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:42.273 [2024-07-15 15:51:35.795124] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:42.273 [2024-07-15 15:51:35.795225] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63612 ] 00:05:42.273 [2024-07-15 15:51:35.934670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.531 [2024-07-15 15:51:36.062224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.531 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:43.464 ====================================== 00:05:43.464 busy:2211647132 (cyc) 00:05:43.464 total_run_count: 291000 00:05:43.464 tsc_hz: 2200000000 (cyc) 00:05:43.464 ====================================== 00:05:43.464 poller_cost: 7600 (cyc), 3454 (nsec) 00:05:43.464 00:05:43.464 real 0m1.386s 00:05:43.464 user 0m1.221s 00:05:43.464 sys 0m0.056s 00:05:43.464 15:51:37 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.464 15:51:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.464 ************************************ 00:05:43.464 END TEST thread_poller_perf 00:05:43.464 ************************************ 00:05:43.722 15:51:37 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:43.722 15:51:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:43.722 15:51:37 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:43.722 15:51:37 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.722 15:51:37 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.722 ************************************ 00:05:43.722 START TEST thread_poller_perf 00:05:43.722 ************************************ 00:05:43.722 15:51:37 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:43.722 [2024-07-15 15:51:37.237810] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
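On the poller_perf summary above (1000 pollers at a 1 microsecond period): the printed poller_cost is consistent with the busy cycle count divided by total_run_count, converted to nanoseconds through the reported tsc_hz; the zero-period run that follows prints the same kind of summary (563 cyc, 255 nsec). Re-deriving the first run's figures, purely as an illustration of how the three numbers relate (the exact rounding inside poller_perf is an assumption here):

  $ awk 'BEGIN { busy = 2211647132; runs = 291000; hz = 2200000000;
                 cyc = int(busy / runs);                      # cycles spent per poller run
                 printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, int(cyc * 1e9 / hz) }'
  poller_cost: 7600 (cyc), 3454 (nsec)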
00:05:43.722 [2024-07-15 15:51:37.237923] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63642 ] 00:05:43.722 [2024-07-15 15:51:37.377035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.980 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:43.980 [2024-07-15 15:51:37.519850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.913 ====================================== 00:05:44.913 busy:2202537283 (cyc) 00:05:44.913 total_run_count: 3910000 00:05:44.913 tsc_hz: 2200000000 (cyc) 00:05:44.913 ====================================== 00:05:44.913 poller_cost: 563 (cyc), 255 (nsec) 00:05:44.913 00:05:44.913 real 0m1.399s 00:05:44.913 user 0m1.226s 00:05:44.913 sys 0m0.064s 00:05:44.913 15:51:38 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.913 15:51:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.913 ************************************ 00:05:44.913 END TEST thread_poller_perf 00:05:44.913 ************************************ 00:05:45.172 15:51:38 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:45.172 15:51:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:45.172 00:05:45.172 real 0m2.984s 00:05:45.172 user 0m2.502s 00:05:45.172 sys 0m0.253s 00:05:45.172 15:51:38 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.172 15:51:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.172 ************************************ 00:05:45.172 END TEST thread 00:05:45.172 ************************************ 00:05:45.172 15:51:38 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.172 15:51:38 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:45.172 15:51:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.172 15:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.172 15:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:45.172 ************************************ 00:05:45.172 START TEST accel 00:05:45.172 ************************************ 00:05:45.172 15:51:38 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:45.172 * Looking for test storage... 00:05:45.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:45.172 15:51:38 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:45.172 15:51:38 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:45.172 15:51:38 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:45.172 15:51:38 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63722 00:05:45.172 15:51:38 accel -- accel/accel.sh@63 -- # waitforlisten 63722 00:05:45.172 15:51:38 accel -- common/autotest_common.sh@829 -- # '[' -z 63722 ']' 00:05:45.172 15:51:38 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.172 15:51:38 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.172 15:51:38 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
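The accel suite that starts here brings up a plain spdk_tgt (pid 63722, configured through the JSON fed in on /dev/fd/63 by build_accel_config) and then checks the default opcode-to-module mapping: the long IFS== read loop in the next lines walks the output of accel_get_opc_assignments and expects every opcode to come back assigned to the software module, since the build_accel_config trace shows no hardware accel modules configured for this run. Done by hand, the query is roughly the following (assuming rpc_cmd wraps the stock scripts/rpc.py client; the jq filter is the one from the trace):

  $ scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'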
00:05:45.172 15:51:38 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.172 15:51:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.172 15:51:38 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:45.172 15:51:38 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:45.172 15:51:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.172 15:51:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.172 15:51:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.172 15:51:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.172 15:51:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.172 15:51:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:45.172 15:51:38 accel -- accel/accel.sh@41 -- # jq -r . 00:05:45.172 [2024-07-15 15:51:38.846814] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:45.172 [2024-07-15 15:51:38.846919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63722 ] 00:05:45.430 [2024-07-15 15:51:38.984654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.430 [2024-07-15 15:51:39.098112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.370 15:51:39 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.370 15:51:39 accel -- common/autotest_common.sh@862 -- # return 0 00:05:46.370 15:51:39 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:46.370 15:51:39 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:46.370 15:51:39 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:46.370 15:51:39 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:46.370 15:51:39 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:46.370 15:51:39 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:46.370 15:51:39 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.370 15:51:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.370 15:51:39 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:46.370 15:51:39 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.370 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.370 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.370 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.371 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.371 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.371 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.371 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.371 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.371 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.371 15:51:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.371 15:51:39 accel -- accel/accel.sh@72 -- # IFS== 00:05:46.371 15:51:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:46.371 15:51:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.371 15:51:39 accel -- accel/accel.sh@75 -- # killprocess 63722 00:05:46.371 15:51:39 accel -- common/autotest_common.sh@948 -- # '[' -z 63722 ']' 00:05:46.371 15:51:39 accel -- common/autotest_common.sh@952 -- # kill -0 63722 00:05:46.371 15:51:39 accel -- common/autotest_common.sh@953 -- # uname 00:05:46.371 15:51:39 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.371 15:51:39 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63722 00:05:46.371 15:51:39 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.371 15:51:39 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.371 killing process with pid 63722 00:05:46.371 15:51:39 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63722' 00:05:46.371 15:51:39 accel -- common/autotest_common.sh@967 -- # kill 63722 00:05:46.371 15:51:39 accel -- common/autotest_common.sh@972 -- # wait 63722 00:05:46.636 15:51:40 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:46.636 15:51:40 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:46.636 15:51:40 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:46.636 15:51:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.636 15:51:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.636 15:51:40 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:46.636 15:51:40 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:46.636 15:51:40 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:46.636 15:51:40 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.636 15:51:40 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.636 15:51:40 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.636 15:51:40 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.636 15:51:40 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.636 15:51:40 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:46.636 15:51:40 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:46.895 15:51:40 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.895 15:51:40 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:46.895 15:51:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.895 15:51:40 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:46.895 15:51:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:46.895 15:51:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.895 15:51:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.895 ************************************ 00:05:46.895 START TEST accel_missing_filename 00:05:46.895 ************************************ 00:05:46.895 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:46.895 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:46.895 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:46.895 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:46.895 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.895 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:46.895 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.895 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:46.895 15:51:40 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:46.895 15:51:40 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:46.895 15:51:40 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.895 15:51:40 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.895 15:51:40 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.895 15:51:40 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.895 15:51:40 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.895 15:51:40 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:46.895 15:51:40 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:46.895 [2024-07-15 15:51:40.454607] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:46.895 [2024-07-15 15:51:40.455262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63786 ] 00:05:46.895 [2024-07-15 15:51:40.590933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.153 [2024-07-15 15:51:40.689411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.153 [2024-07-15 15:51:40.749383] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.153 [2024-07-15 15:51:40.827026] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:47.411 A filename is required. 
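accel_missing_filename is the first of several negative tests in this file: accel_perf is launched with -w compress but without -l, it refuses to start ('A filename is required.' above), and the NOT wrapper from autotest_common.sh inverts the non-zero exit status so that the expected failure counts as a pass; the es= bookkeeping just below is that wrapper normalizing the exit code it saw. A minimal sketch of the idiom (SPDK's real NOT() in autotest_common.sh is more elaborate than this):

  NOT() {
      if "$@"; then
          return 1    # command unexpectedly succeeded -> test failure
      fi
      return 0        # command failed as expected -> test passes
  }
  NOT accel_perf -t 1 -w compress    # passes precisely because accel_perf exits non-zero without -l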
00:05:47.411 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:47.411 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.411 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:47.411 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:47.411 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:47.411 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.411 00:05:47.411 real 0m0.487s 00:05:47.411 user 0m0.309s 00:05:47.411 sys 0m0.116s 00:05:47.411 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.411 15:51:40 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:47.411 ************************************ 00:05:47.411 END TEST accel_missing_filename 00:05:47.411 ************************************ 00:05:47.411 15:51:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.411 15:51:40 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.411 15:51:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:47.411 15:51:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.411 15:51:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.411 ************************************ 00:05:47.411 START TEST accel_compress_verify 00:05:47.411 ************************************ 00:05:47.411 15:51:40 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.411 15:51:40 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:47.411 15:51:40 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.411 15:51:40 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:47.411 15:51:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.411 15:51:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:47.411 15:51:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.411 15:51:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.411 15:51:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.411 15:51:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:47.411 15:51:40 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.411 15:51:40 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.411 15:51:40 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.411 15:51:40 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.411 15:51:40 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.411 15:51:40 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:47.411 15:51:40 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:47.411 [2024-07-15 15:51:40.986334] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:47.411 [2024-07-15 15:51:40.986415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63816 ] 00:05:47.411 [2024-07-15 15:51:41.123439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.669 [2024-07-15 15:51:41.263162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.669 [2024-07-15 15:51:41.325546] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.928 [2024-07-15 15:51:41.409388] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:47.928 00:05:47.928 Compression does not support the verify option, aborting. 00:05:47.928 15:51:41 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:47.928 15:51:41 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.928 15:51:41 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:47.928 15:51:41 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:47.928 15:51:41 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:47.928 15:51:41 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.928 00:05:47.928 real 0m0.539s 00:05:47.928 user 0m0.370s 00:05:47.928 sys 0m0.116s 00:05:47.928 ************************************ 00:05:47.928 END TEST accel_compress_verify 00:05:47.928 ************************************ 00:05:47.928 15:51:41 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.928 15:51:41 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:47.928 15:51:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.928 15:51:41 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:47.928 15:51:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:47.928 15:51:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.928 15:51:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.928 ************************************ 00:05:47.928 START TEST accel_wrong_workload 00:05:47.928 ************************************ 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:47.928 15:51:41 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:47.928 15:51:41 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:47.928 15:51:41 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.928 15:51:41 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.928 15:51:41 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.928 15:51:41 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.928 15:51:41 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.928 15:51:41 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:47.928 15:51:41 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:47.928 Unsupported workload type: foobar 00:05:47.928 [2024-07-15 15:51:41.582500] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:47.928 accel_perf options: 00:05:47.928 [-h help message] 00:05:47.928 [-q queue depth per core] 00:05:47.928 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:47.928 [-T number of threads per core 00:05:47.928 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:47.928 [-t time in seconds] 00:05:47.928 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:47.928 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:47.928 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:47.928 [-l for compress/decompress workloads, name of uncompressed input file 00:05:47.928 [-S for crc32c workload, use this seed value (default 0) 00:05:47.928 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:47.928 [-f for fill workload, use this BYTE value (default 255) 00:05:47.928 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:47.928 [-y verify result if this switch is on] 00:05:47.928 [-a tasks to allocate per core (default: same value as -q)] 00:05:47.928 Can be used to spread operations across a wider range of memory. 
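The same pattern repeats for accel_wrong_workload and, just below, accel_negative_buffers: an unknown workload (-w foobar) and a negative xor buffer count (-x -1) are both rejected while the command line is still being parsed (spdk_app_parse_args fails), accel_perf prints its normal usage text, and NOT turns the exit status of 1 into a pass. For contrast, the valid invocation that the accel_crc32c test further down drives looks like this (path shortened here; the full path appears in the trace):

  $ ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y    # 4 KiB buffers, seed 32, verify results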
00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.928 00:05:47.928 real 0m0.031s 00:05:47.928 user 0m0.017s 00:05:47.928 sys 0m0.013s 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.928 ************************************ 00:05:47.928 END TEST accel_wrong_workload 00:05:47.928 ************************************ 00:05:47.928 15:51:41 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:47.928 15:51:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.928 15:51:41 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:47.928 15:51:41 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:47.928 15:51:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.928 15:51:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.928 ************************************ 00:05:47.928 START TEST accel_negative_buffers 00:05:47.928 ************************************ 00:05:47.928 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:47.928 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:47.928 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:47.928 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:47.928 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.928 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:47.928 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.928 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:47.928 15:51:41 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:47.928 15:51:41 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:47.928 15:51:41 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.928 15:51:41 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.928 15:51:41 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.928 15:51:41 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.928 15:51:41 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.928 15:51:41 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:47.928 15:51:41 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:48.187 -x option must be non-negative. 
00:05:48.187 [2024-07-15 15:51:41.660360] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:48.187 accel_perf options: 00:05:48.187 [-h help message] 00:05:48.187 [-q queue depth per core] 00:05:48.187 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:48.187 [-T number of threads per core 00:05:48.187 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:48.187 [-t time in seconds] 00:05:48.187 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:48.187 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:48.187 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:48.187 [-l for compress/decompress workloads, name of uncompressed input file 00:05:48.187 [-S for crc32c workload, use this seed value (default 0) 00:05:48.187 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:48.187 [-f for fill workload, use this BYTE value (default 255) 00:05:48.187 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:48.187 [-y verify result if this switch is on] 00:05:48.187 [-a tasks to allocate per core (default: same value as -q)] 00:05:48.187 Can be used to spread operations across a wider range of memory. 00:05:48.187 ************************************ 00:05:48.187 END TEST accel_negative_buffers 00:05:48.187 ************************************ 00:05:48.187 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:48.187 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.187 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.187 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.187 00:05:48.187 real 0m0.029s 00:05:48.187 user 0m0.018s 00:05:48.187 sys 0m0.010s 00:05:48.187 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.187 15:51:41 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:48.187 15:51:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.187 15:51:41 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:48.187 15:51:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:48.187 15:51:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.187 15:51:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.187 ************************************ 00:05:48.187 START TEST accel_crc32c 00:05:48.187 ************************************ 00:05:48.187 15:51:41 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:48.187 15:51:41 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:48.187 [2024-07-15 15:51:41.737329] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:48.187 [2024-07-15 15:51:41.737447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63869 ] 00:05:48.187 [2024-07-15 15:51:41.875464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.446 [2024-07-15 15:51:42.019291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.446 15:51:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:49.820 15:51:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.820 00:05:49.820 real 0m1.549s 00:05:49.820 user 0m1.328s 00:05:49.820 sys 0m0.127s 00:05:49.820 15:51:43 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.820 15:51:43 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:49.820 ************************************ 00:05:49.820 END TEST accel_crc32c 00:05:49.820 ************************************ 00:05:49.820 15:51:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.820 15:51:43 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:49.820 15:51:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:49.820 15:51:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.820 15:51:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.820 ************************************ 00:05:49.820 START TEST accel_crc32c_C2 00:05:49.820 ************************************ 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:49.820 15:51:43 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:49.820 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:49.820 [2024-07-15 15:51:43.338930] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:49.820 [2024-07-15 15:51:43.339043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63909 ] 00:05:49.820 [2024-07-15 15:51:43.470651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.078 [2024-07-15 15:51:43.574919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.078 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.079 15:51:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.453 15:51:44 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.453 00:05:51.453 real 0m1.495s 00:05:51.453 user 0m1.279s 00:05:51.453 sys 0m0.123s 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.453 15:51:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:51.453 ************************************ 00:05:51.453 END TEST accel_crc32c_C2 00:05:51.453 ************************************ 00:05:51.453 15:51:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.453 15:51:44 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:51.453 15:51:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:51.453 15:51:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.453 15:51:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.453 ************************************ 00:05:51.453 START TEST accel_copy 00:05:51.453 ************************************ 00:05:51.453 15:51:44 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:51.453 15:51:44 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:51.453 15:51:44 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:51.453 [2024-07-15 15:51:44.887127] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:51.453 [2024-07-15 15:51:44.887268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63938 ] 00:05:51.453 [2024-07-15 15:51:45.027626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.453 [2024-07-15 15:51:45.158419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.711 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.712 15:51:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:53.087 15:51:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.087 00:05:53.087 real 0m1.531s 00:05:53.087 user 0m0.016s 00:05:53.087 sys 0m0.002s 00:05:53.087 15:51:46 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.087 15:51:46 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:53.087 ************************************ 00:05:53.088 END TEST accel_copy 00:05:53.088 ************************************ 00:05:53.088 15:51:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.088 15:51:46 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.088 15:51:46 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:53.088 15:51:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.088 15:51:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.088 ************************************ 00:05:53.088 START TEST accel_fill 00:05:53.088 ************************************ 00:05:53.088 15:51:46 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.088 15:51:46 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:53.088 [2024-07-15 15:51:46.452118] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:53.088 [2024-07-15 15:51:46.452208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63978 ] 00:05:53.088 [2024-07-15 15:51:46.583898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.088 [2024-07-15 15:51:46.702408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.088 15:51:46 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.088 15:51:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:54.462 15:51:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.462 00:05:54.462 real 0m1.505s 00:05:54.462 user 0m1.306s 00:05:54.462 sys 0m0.104s 00:05:54.462 15:51:47 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.462 15:51:47 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:54.462 ************************************ 00:05:54.462 END TEST accel_fill 00:05:54.462 ************************************ 00:05:54.462 15:51:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.462 15:51:47 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:54.462 15:51:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:54.462 15:51:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.462 15:51:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.462 ************************************ 00:05:54.462 START TEST accel_copy_crc32c 00:05:54.462 ************************************ 00:05:54.462 15:51:47 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:54.462 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:54.462 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:54.462 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.462 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:54.463 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.463 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:54.463 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:54.463 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.463 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.463 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.463 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.463 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.463 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:54.463 15:51:47 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:54.463 [2024-07-15 15:51:48.013406] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:54.463 [2024-07-15 15:51:48.013531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64013 ] 00:05:54.463 [2024-07-15 15:51:48.147863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.722 [2024-07-15 15:51:48.267839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.722 15:51:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.095 00:05:56.095 real 0m1.514s 00:05:56.095 user 0m1.298s 00:05:56.095 sys 0m0.121s 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.095 15:51:49 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:56.095 ************************************ 00:05:56.095 END TEST accel_copy_crc32c 00:05:56.095 ************************************ 00:05:56.095 15:51:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.095 15:51:49 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:56.095 15:51:49 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:56.095 15:51:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.095 15:51:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.095 ************************************ 00:05:56.095 START TEST accel_copy_crc32c_C2 00:05:56.095 ************************************ 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:56.095 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:56.095 [2024-07-15 15:51:49.579376] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:05:56.095 [2024-07-15 15:51:49.579525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64047 ] 00:05:56.095 [2024-07-15 15:51:49.727403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.355 [2024-07-15 15:51:49.851148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.355 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.356 15:51:49 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.356 15:51:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.732 00:05:57.732 real 0m1.555s 00:05:57.732 user 0m1.335s 00:05:57.732 sys 0m0.125s 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:05:57.732 ************************************ 00:05:57.732 END TEST accel_copy_crc32c_C2 00:05:57.732 ************************************ 00:05:57.732 15:51:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:57.732 15:51:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.732 15:51:51 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:57.732 15:51:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:57.732 15:51:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.732 15:51:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.732 ************************************ 00:05:57.732 START TEST accel_dualcast 00:05:57.732 ************************************ 00:05:57.732 15:51:51 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:57.732 15:51:51 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:57.732 [2024-07-15 15:51:51.178853] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:05:57.732 [2024-07-15 15:51:51.178981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64082 ] 00:05:57.732 [2024-07-15 15:51:51.315468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.732 [2024-07-15 15:51:51.445019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.989 15:51:51 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.990 15:51:51 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.990 15:51:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:59.365 15:51:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.365 00:05:59.365 real 0m1.519s 00:05:59.365 user 0m1.313s 00:05:59.365 sys 0m0.112s 00:05:59.365 15:51:52 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.365 ************************************ 00:05:59.365 END TEST accel_dualcast 00:05:59.365 ************************************ 00:05:59.365 15:51:52 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:59.365 15:51:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.365 15:51:52 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:59.365 15:51:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:59.365 15:51:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.365 15:51:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.365 ************************************ 00:05:59.365 START TEST accel_compare 00:05:59.365 ************************************ 00:05:59.365 15:51:52 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:59.365 15:51:52 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:59.365 [2024-07-15 15:51:52.749263] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
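The accel_compare case started just above drives the same standalone accel_perf example as the dualcast case, only with -w compare. A rough manual equivalent is sketched below; the binary path is the one the harness echoes on this VM, and dropping the -c /dev/fd/62 JSON-config descriptor is an assumption that only holds when no accel driver overrides are needed.

  # Sketch: re-run the 1-second software compare workload by hand,
  # reusing the -t/-w/-y switches the harness passes above.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y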
00:05:59.365 [2024-07-15 15:51:52.749393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64116 ] 00:05:59.365 [2024-07-15 15:51:52.890212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.365 [2024-07-15 15:51:53.029874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.365 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.624 15:51:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:00.558 ************************************ 00:06:00.558 END TEST accel_compare 00:06:00.558 ************************************ 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:00.558 15:51:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.558 00:06:00.558 real 0m1.550s 00:06:00.558 user 0m1.322s 00:06:00.558 sys 0m0.132s 00:06:00.558 15:51:54 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.558 15:51:54 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:00.816 15:51:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.816 15:51:54 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:00.816 15:51:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:00.816 15:51:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.816 15:51:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.816 ************************************ 00:06:00.816 START TEST accel_xor 00:06:00.816 ************************************ 00:06:00.816 15:51:54 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:00.816 15:51:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:00.816 [2024-07-15 15:51:54.355353] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:06:00.816 [2024-07-15 15:51:54.355484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64153 ] 00:06:00.816 [2024-07-15 15:51:54.501896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.075 [2024-07-15 15:51:54.624008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:01.075 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.076 15:51:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.452 15:51:55 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.452 00:06:02.452 real 0m1.529s 00:06:02.452 user 0m1.307s 00:06:02.452 sys 0m0.126s 00:06:02.452 15:51:55 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.452 15:51:55 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:02.452 ************************************ 00:06:02.452 END TEST accel_xor 00:06:02.452 ************************************ 00:06:02.452 15:51:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.452 15:51:55 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:02.452 15:51:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:02.452 15:51:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.452 15:51:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.452 ************************************ 00:06:02.452 START TEST accel_xor 00:06:02.452 ************************************ 00:06:02.452 15:51:55 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:02.452 15:51:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:02.452 [2024-07-15 15:51:55.928578] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
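This second accel_xor pass differs from the first only by the extra -x 3 argument on the accel_perf command line above. Side by side, the two invocations look like the sketch below (paths and flags copied from the log; reading -x as the number of xor source vectors is an assumption, the log itself does not say what the flag means):

  # First pass: xor workload with accel_perf's default source-buffer count
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
  # Second pass: same workload with -x 3 (assumed: three source vectors)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3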
00:06:02.452 [2024-07-15 15:51:55.928670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64187 ] 00:06:02.452 [2024-07-15 15:51:56.066045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.712 [2024-07-15 15:51:56.199158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.712 15:51:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.088 15:51:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.088 15:51:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.088 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.088 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.088 15:51:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.089 15:51:57 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:04.089 15:51:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.089 00:06:04.089 real 0m1.540s 00:06:04.089 user 0m1.319s 00:06:04.089 sys 0m0.127s 00:06:04.089 15:51:57 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.089 15:51:57 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:04.089 ************************************ 00:06:04.089 END TEST accel_xor 00:06:04.089 ************************************ 00:06:04.089 15:51:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.089 15:51:57 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:04.089 15:51:57 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:04.089 15:51:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.089 15:51:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.089 ************************************ 00:06:04.089 START TEST accel_dif_verify 00:06:04.089 ************************************ 00:06:04.089 15:51:57 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:04.089 15:51:57 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:04.089 [2024-07-15 15:51:57.525427] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
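The dif_verify case drops the -y switch, and the values the harness reads in below (two '4096 bytes' entries, one '512 bytes', one '8 bytes') suggest 4096-byte blocks carrying 8 bytes of DIF metadata; that mapping is inferred from the variable dump, not stated by the log. With the same path assumption as the earlier sketch, a bare-bones re-run would be:

  # Sketch: 1-second software dif_verify workload as launched by the harness
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify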
00:06:04.089 [2024-07-15 15:51:57.525559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64222 ] 00:06:04.089 [2024-07-15 15:51:57.677984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.089 [2024-07-15 15:51:57.809951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.348 15:51:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.352 15:51:59 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.352 ************************************ 00:06:05.352 END TEST accel_dif_verify 00:06:05.352 ************************************ 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.352 15:51:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.353 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.353 15:51:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.353 15:51:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.353 15:51:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:05.353 15:51:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.353 00:06:05.353 real 0m1.551s 00:06:05.353 user 0m1.317s 00:06:05.353 sys 0m0.139s 00:06:05.353 15:51:59 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.353 15:51:59 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:05.353 15:51:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.353 15:51:59 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:05.353 15:51:59 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:05.353 15:51:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.353 15:51:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.611 ************************************ 00:06:05.611 START TEST accel_dif_generate 00:06:05.611 ************************************ 00:06:05.611 15:51:59 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.611 15:51:59 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:05.611 15:51:59 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:05.611 [2024-07-15 15:51:59.112926] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:06:05.611 [2024-07-15 15:51:59.113031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64256 ] 00:06:05.611 [2024-07-15 15:51:59.251505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.869 [2024-07-15 15:51:59.369458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.869 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.870 15:51:59 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.870 15:51:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.241 15:52:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:07.242 15:52:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.242 00:06:07.242 real 0m1.602s 
00:06:07.242 user 0m1.384s 00:06:07.242 sys 0m0.122s 00:06:07.242 15:52:00 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.242 ************************************ 00:06:07.242 END TEST accel_dif_generate 00:06:07.242 ************************************ 00:06:07.242 15:52:00 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:07.242 15:52:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.242 15:52:00 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:07.242 15:52:00 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:07.242 15:52:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.242 15:52:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.242 ************************************ 00:06:07.242 START TEST accel_dif_generate_copy 00:06:07.242 ************************************ 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:07.242 15:52:00 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:07.242 [2024-07-15 15:52:00.765899] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
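The accel_dif_generate_copy case launched just above runs the same accel_perf example binary as the earlier tests, only with a different -w workload; the -c /dev/fd/62 argument is how the harness hands accel_perf the JSON accel configuration it assembles (visible as accel_json_cfg in the trace). As a rough sketch for repeating the benchmark by hand, assuming the /home/vagrant/spdk_repo/spdk checkout and built examples shown in the log (the config argument can usually be dropped for a plain software run):

    # 1-second software dif_generate_copy benchmark, default accel config
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy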
00:06:07.242 [2024-07-15 15:52:00.766072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64295 ] 00:06:07.242 [2024-07-15 15:52:00.904943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.498 [2024-07-15 15:52:01.078405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.498 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.498 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.498 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.498 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.498 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.498 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.498 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.498 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.498 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:07.498 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.498 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 15:52:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.871 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.871 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.871 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:08.871 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.871 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.871 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.872 00:06:08.872 real 0m1.686s 00:06:08.872 user 0m1.422s 00:06:08.872 sys 0m0.158s 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.872 15:52:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:08.872 ************************************ 00:06:08.872 END TEST accel_dif_generate_copy 00:06:08.872 ************************************ 00:06:08.872 15:52:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.872 15:52:02 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:08.872 15:52:02 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.872 15:52:02 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:08.872 15:52:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.872 15:52:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.872 ************************************ 00:06:08.872 START TEST accel_comp 00:06:08.872 ************************************ 00:06:08.872 15:52:02 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:08.872 15:52:02 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:08.872 15:52:02 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:08.872 [2024-07-15 15:52:02.513898] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:06:08.872 [2024-07-15 15:52:02.514044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64331 ] 00:06:09.131 [2024-07-15 15:52:02.655356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.131 [2024-07-15 15:52:02.828831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.389 15:52:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.766 ************************************ 00:06:10.766 END TEST accel_comp 00:06:10.766 ************************************ 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:10.766 15:52:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.766 00:06:10.766 real 0m1.712s 00:06:10.766 user 0m1.433s 00:06:10.766 sys 0m0.174s 00:06:10.766 15:52:04 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.766 15:52:04 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:10.766 15:52:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.766 15:52:04 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:10.766 15:52:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:10.766 15:52:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.766 15:52:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.766 ************************************ 00:06:10.766 START TEST accel_decomp 00:06:10.766 ************************************ 00:06:10.766 15:52:04 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:10.766 15:52:04 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:10.766 [2024-07-15 15:52:04.275696] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:06:10.766 [2024-07-15 15:52:04.275805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64368 ] 00:06:10.766 [2024-07-15 15:52:04.416165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.026 [2024-07-15 15:52:04.593343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
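Relative to the dif_generate cases, the accel_decomp test traced here adds two arguments to accel_perf: -l /home/vagrant/spdk_repo/spdk/test/accel/bib, which supplies the input data file for the (de)compression workloads, and -y, which enables result verification (it shows up in the config dump just above as val=Yes, where the earlier workloads recorded val=No). A minimal hand-run sketch under the same repo layout:

    # decompress test/accel/bib for 1 second and verify the output (-y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y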
00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.026 15:52:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.413 15:52:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:12.414 15:52:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.414 ************************************ 00:06:12.414 END TEST accel_decomp 00:06:12.414 ************************************ 00:06:12.414 00:06:12.414 real 0m1.715s 00:06:12.414 user 0m1.442s 00:06:12.414 sys 0m0.172s 00:06:12.414 15:52:05 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.414 15:52:05 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:12.414 15:52:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.414 15:52:06 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:12.414 15:52:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:12.414 15:52:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.414 15:52:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.414 ************************************ 00:06:12.414 START TEST accel_decomp_full 00:06:12.414 ************************************ 00:06:12.414 15:52:06 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:12.414 15:52:06 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:12.414 [2024-07-15 15:52:06.034821] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
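The accel_decomp_full variant being started here differs from accel_decomp only by the extra -o 0; judging from the config dump that follows, this switches the operation size from the 4096-byte blocks used above to the whole 111250-byte input. Hand-run sketch with the same assumptions as before:

    # same decompress benchmark, sized to the full input file (-o 0)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0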
00:06:12.414 [2024-07-15 15:52:06.034920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64408 ] 00:06:12.672 [2024-07-15 15:52:06.171097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.672 [2024-07-15 15:52:06.339472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.929 15:52:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.300 15:52:07 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:14.300 15:52:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.300 00:06:14.300 real 0m1.676s 00:06:14.300 user 0m1.412s 00:06:14.300 sys 0m0.165s 00:06:14.300 15:52:07 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.300 15:52:07 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:14.300 ************************************ 00:06:14.300 END TEST accel_decomp_full 00:06:14.300 ************************************ 00:06:14.300 15:52:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.300 15:52:07 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:14.300 15:52:07 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:14.300 15:52:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.300 15:52:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.300 ************************************ 00:06:14.300 START TEST accel_decomp_mcore 00:06:14.300 ************************************ 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:14.300 15:52:07 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:14.300 [2024-07-15 15:52:07.768330] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:06:14.300 [2024-07-15 15:52:07.768425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64443 ] 00:06:14.300 [2024-07-15 15:52:07.906253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.557 [2024-07-15 15:52:08.088245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.557 [2024-07-15 15:52:08.088515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.557 [2024-07-15 15:52:08.088648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.557 [2024-07-15 15:52:08.088656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.557 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
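accel_decomp_mcore repeats the decompress benchmark with the -m 0xf core mask: the EAL parameter line above switches from -c 0x1 to -c 0xf, spdk_app_start reports four available cores, and reactors come up on cores 0 through 3, so the workload is spread across all four. Hand-run sketch with the same assumptions:

    # decompress benchmark across four cores (core mask 0xf)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf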
00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 
15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.558 15:52:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.930 00:06:15.930 real 0m1.707s 00:06:15.930 user 0m5.041s 00:06:15.930 sys 0m0.185s 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.930 15:52:09 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:15.930 ************************************ 00:06:15.930 END TEST accel_decomp_mcore 00:06:15.930 ************************************ 00:06:15.930 15:52:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.930 15:52:09 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:15.930 15:52:09 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:15.930 15:52:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.930 15:52:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.930 ************************************ 00:06:15.930 START TEST accel_decomp_full_mcore 00:06:15.930 ************************************ 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@40 -- # local IFS=, 00:06:15.930 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:15.930 [2024-07-15 15:52:09.518712] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:06:15.930 [2024-07-15 15:52:09.518819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64486 ] 00:06:15.930 [2024-07-15 15:52:09.653187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.190 [2024-07-15 15:52:09.813888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.190 [2024-07-15 15:52:09.814009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.190 [2024-07-15 15:52:09.814152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.190 [2024-07-15 15:52:09.814156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.190 15:52:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.562 00:06:17.562 real 0m1.685s 00:06:17.562 user 0m5.073s 00:06:17.562 sys 0m0.176s 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.562 15:52:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:17.562 ************************************ 00:06:17.562 END TEST accel_decomp_full_mcore 00:06:17.562 ************************************ 00:06:17.562 15:52:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.562 15:52:11 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:17.562 15:52:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:17.562 15:52:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.562 15:52:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.562 ************************************ 00:06:17.562 START TEST accel_decomp_mthread 00:06:17.562 ************************************ 00:06:17.562 15:52:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:17.562 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:17.563 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:17.563 [2024-07-15 15:52:11.252827] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:06:17.563 [2024-07-15 15:52:11.252914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64518 ] 00:06:17.821 [2024-07-15 15:52:11.388984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.821 [2024-07-15 15:52:11.547166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.079 15:52:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.454 00:06:19.454 real 0m1.667s 00:06:19.454 user 0m1.409s 00:06:19.454 sys 0m0.158s 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.454 15:52:12 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:19.454 ************************************ 00:06:19.454 END TEST accel_decomp_mthread 00:06:19.454 ************************************ 00:06:19.454 15:52:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.454 15:52:12 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.454 15:52:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:19.454 15:52:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.454 15:52:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.454 ************************************ 00:06:19.454 START 
TEST accel_decomp_full_mthread 00:06:19.454 ************************************ 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:19.454 15:52:12 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:19.454 [2024-07-15 15:52:12.964938] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:06:19.455 [2024-07-15 15:52:12.965046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64559 ] 00:06:19.455 [2024-07-15 15:52:13.104127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.716 [2024-07-15 15:52:13.295552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:19.716 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.717 15:52:13 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.717 15:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.108 00:06:21.108 real 0m1.718s 00:06:21.108 user 0m1.446s 00:06:21.108 sys 0m0.176s 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.108 15:52:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:21.108 ************************************ 00:06:21.108 END TEST accel_decomp_full_mthread 00:06:21.108 ************************************ 
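The four decompress tests above (accel_decomp_mcore, accel_decomp_full_mcore, accel_decomp_mthread, accel_decomp_full_mthread) all drive the same accel_perf example binary against the pre-compressed test/accel/bib input and differ only in core mask, transfer size and thread count. A minimal standalone sketch of that invocation pattern, reconstructed from the traced commands above — the flag readings in the comments are my interpretation of this trace, not taken from accel_perf documentation, and the harness additionally feeds a generated accel JSON config over -c /dev/fd/62, which is omitted here:

    SPDK=/home/vagrant/spdk_repo/spdk
    BIB=$SPDK/test/accel/bib                                   # pre-compressed input used by all four tests

    # mcore: 1-second software decompress across a 4-core mask (0xf), 4096-byte transfers
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y -m 0xf

    # full_mcore: same mask with -o 0, matching the 111250-byte transfer size shown in the trace
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf

    # mthread / full_mthread: single core, two worker threads (-T 2)
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y -T 2
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y -o 0 -T 2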
00:06:21.108 15:52:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.108 15:52:14 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:21.108 15:52:14 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:21.108 15:52:14 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:21.108 15:52:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.108 15:52:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.108 15:52:14 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:21.108 15:52:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.108 15:52:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.108 15:52:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.108 15:52:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.108 15:52:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:21.108 15:52:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.108 15:52:14 accel -- accel/accel.sh@41 -- # jq -r . 00:06:21.108 ************************************ 00:06:21.108 START TEST accel_dif_functional_tests 00:06:21.108 ************************************ 00:06:21.108 15:52:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:21.108 [2024-07-15 15:52:14.764512] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:06:21.108 [2024-07-15 15:52:14.764617] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64600 ] 00:06:21.366 [2024-07-15 15:52:14.905334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.366 [2024-07-15 15:52:15.085202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.366 [2024-07-15 15:52:15.085333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.366 [2024-07-15 15:52:15.085354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.624 00:06:21.624 00:06:21.624 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.624 http://cunit.sourceforge.net/ 00:06:21.624 00:06:21.624 00:06:21.624 Suite: accel_dif 00:06:21.624 Test: verify: DIF generated, GUARD check ...passed 00:06:21.624 Test: verify: DIF generated, APPTAG check ...passed 00:06:21.624 Test: verify: DIF generated, REFTAG check ...passed 00:06:21.624 Test: verify: DIF not generated, GUARD check ...[2024-07-15 15:52:15.226090] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:21.624 passed 00:06:21.624 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 15:52:15.226430] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:21.624 passed 00:06:21.624 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 15:52:15.226588] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:21.624 passed 00:06:21.624 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:21.624 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:21.624 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-15 15:52:15.226782] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, 
Actual=14 00:06:21.624 passed 00:06:21.624 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:21.624 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:21.624 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 15:52:15.227080] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:21.624 passed 00:06:21.624 Test: verify copy: DIF generated, GUARD check ...passed 00:06:21.624 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:21.624 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:21.625 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 15:52:15.227820] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:21.625 passed 00:06:21.625 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 15:52:15.228154] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:21.625 passed 00:06:21.625 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 15:52:15.228314] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:21.625 passed 00:06:21.625 Test: generate copy: DIF generated, GUARD check ...passed 00:06:21.625 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:21.625 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:21.625 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:21.625 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:21.625 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:21.625 Test: generate copy: iovecs-len validate ...passed 00:06:21.625 Test: generate copy: buffer alignment validate ...[2024-07-15 15:52:15.229099] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:21.625 passed 00:06:21.625 00:06:21.625 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.625 suites 1 1 n/a 0 0 00:06:21.625 tests 26 26 26 0 0 00:06:21.625 asserts 115 115 115 0 n/a 00:06:21.625 00:06:21.625 Elapsed time = 0.006 seconds 00:06:21.883 00:06:21.883 real 0m0.852s 00:06:21.883 user 0m1.215s 00:06:21.883 sys 0m0.216s 00:06:21.883 15:52:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.883 15:52:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:21.883 ************************************ 00:06:21.883 END TEST accel_dif_functional_tests 00:06:21.883 ************************************ 00:06:21.883 15:52:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.883 00:06:21.883 real 0m36.896s 00:06:21.883 user 0m38.652s 00:06:21.883 sys 0m4.566s 00:06:21.883 ************************************ 00:06:21.883 END TEST accel 00:06:21.883 ************************************ 00:06:21.883 15:52:15 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.883 15:52:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.142 15:52:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:22.142 15:52:15 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:22.142 15:52:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.142 15:52:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.142 15:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:22.142 ************************************ 00:06:22.142 START TEST accel_rpc 00:06:22.142 ************************************ 00:06:22.142 15:52:15 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:22.142 * Looking for test storage... 00:06:22.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:22.142 15:52:15 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.142 15:52:15 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64670 00:06:22.142 15:52:15 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:22.142 15:52:15 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64670 00:06:22.142 15:52:15 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64670 ']' 00:06:22.142 15:52:15 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.142 15:52:15 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.142 15:52:15 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.142 15:52:15 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.142 15:52:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.142 [2024-07-15 15:52:15.804361] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:06:22.142 [2024-07-15 15:52:15.804468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64670 ] 00:06:22.401 [2024-07-15 15:52:15.942995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.401 [2024-07-15 15:52:16.093946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.333 15:52:16 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.333 15:52:16 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:23.333 15:52:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:23.333 15:52:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:23.333 15:52:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:23.333 15:52:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:23.333 15:52:16 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:23.333 15:52:16 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.333 15:52:16 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.333 15:52:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.333 ************************************ 00:06:23.333 START TEST accel_assign_opcode 00:06:23.333 ************************************ 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:23.333 [2024-07-15 15:52:16.814568] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:23.333 [2024-07-15 15:52:16.822547] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.333 15:52:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:23.590 15:52:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.590 15:52:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:23.590 15:52:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.590 15:52:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:23.590 15:52:17 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:06:23.590 15:52:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:23.590 15:52:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.590 software 00:06:23.590 00:06:23.590 real 0m0.391s 00:06:23.590 user 0m0.047s 00:06:23.590 sys 0m0.009s 00:06:23.590 15:52:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.590 ************************************ 00:06:23.591 END TEST accel_assign_opcode 00:06:23.591 ************************************ 00:06:23.591 15:52:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:23.591 15:52:17 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:23.591 15:52:17 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64670 00:06:23.591 15:52:17 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64670 ']' 00:06:23.591 15:52:17 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64670 00:06:23.591 15:52:17 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:23.591 15:52:17 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.591 15:52:17 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64670 00:06:23.591 15:52:17 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.591 killing process with pid 64670 00:06:23.591 15:52:17 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.591 15:52:17 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64670' 00:06:23.591 15:52:17 accel_rpc -- common/autotest_common.sh@967 -- # kill 64670 00:06:23.591 15:52:17 accel_rpc -- common/autotest_common.sh@972 -- # wait 64670 00:06:24.156 00:06:24.156 real 0m2.209s 00:06:24.156 user 0m2.201s 00:06:24.156 sys 0m0.563s 00:06:24.156 15:52:17 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.156 ************************************ 00:06:24.156 END TEST accel_rpc 00:06:24.156 ************************************ 00:06:24.156 15:52:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.413 15:52:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:24.413 15:52:17 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:24.413 15:52:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.413 15:52:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.413 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:24.413 ************************************ 00:06:24.413 START TEST app_cmdline 00:06:24.413 ************************************ 00:06:24.413 15:52:17 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:24.413 * Looking for test storage... 
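The accel_rpc flow traced above assigns the copy opcode to a module over JSON-RPC while spdk_tgt is still held at --wait-for-rpc, then completes initialization and reads the assignment back. A rough standalone equivalent of what rpc_cmd is doing, assuming rpc.py talks to the default /var/tmp/spdk.sock socket the harness waits on; the background/wait and cleanup handling here is a simplified stand-in for waitforlisten and killprocess:

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC=$SPDK/scripts/rpc.py

    $SPDK/build/bin/spdk_tgt --wait-for-rpc &                  # hold subsystem init until told otherwise
    tgt_pid=$!
    sleep 1                                                    # crude stand-in for waitforlisten

    $RPC accel_assign_opc -o copy -m incorrect                 # accepted pre-init even for a bogus module name
    $RPC accel_assign_opc -o copy -m software                  # re-assign copy to the software module
    $RPC framework_start_init                                  # let initialization finish
    $RPC accel_get_opc_assignments | jq -r .copy               # prints "software", as checked above

    kill $tgt_pid                                              # the harness does this via killprocess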
00:06:24.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:24.413 15:52:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:24.413 15:52:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64781 00:06:24.413 15:52:17 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:24.413 15:52:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64781 00:06:24.413 15:52:17 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64781 ']' 00:06:24.413 15:52:17 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.413 15:52:17 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.413 15:52:17 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.413 15:52:17 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.413 15:52:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.413 [2024-07-15 15:52:18.080081] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:06:24.413 [2024-07-15 15:52:18.080212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64781 ] 00:06:24.671 [2024-07-15 15:52:18.225993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.671 [2024-07-15 15:52:18.387215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.604 15:52:19 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.604 15:52:19 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:25.604 15:52:19 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:25.862 { 00:06:25.862 "fields": { 00:06:25.862 "commit": "2f3522da7", 00:06:25.862 "major": 24, 00:06:25.862 "minor": 9, 00:06:25.862 "patch": 0, 00:06:25.862 "suffix": "-pre" 00:06:25.862 }, 00:06:25.862 "version": "SPDK v24.09-pre git sha1 2f3522da7" 00:06:25.862 } 00:06:25.862 15:52:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:25.862 15:52:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:25.862 15:52:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:25.862 15:52:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:25.862 15:52:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.862 15:52:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:25.862 15:52:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.862 15:52:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:25.862 15:52:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:25.862 15:52:19 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:25.862 15:52:19 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.120 2024/07/15 15:52:19 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:26.121 request: 00:06:26.121 { 00:06:26.121 "method": "env_dpdk_get_mem_stats", 00:06:26.121 "params": {} 00:06:26.121 } 00:06:26.121 Got JSON-RPC error response 00:06:26.121 GoRPCClient: error on JSON-RPC call 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.121 15:52:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64781 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64781 ']' 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64781 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64781 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64781' 00:06:26.121 killing process with pid 64781 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@967 -- # kill 64781 00:06:26.121 15:52:19 app_cmdline -- common/autotest_common.sh@972 -- # wait 64781 00:06:26.687 00:06:26.687 real 0m2.317s 00:06:26.687 user 0m2.743s 00:06:26.687 sys 0m0.617s 00:06:26.687 15:52:20 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.687 15:52:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.687 ************************************ 00:06:26.687 END TEST app_cmdline 00:06:26.687 
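The app_cmdline run above reduces to three checks against a target started with a restricted RPC whitelist: the two allowed methods answer normally, and anything else is refused with JSON-RPC error -32601. A minimal manual sketch of the same flow, assuming the SPDK tree and paths shown in this log:

  SPDK=/home/vagrant/spdk_repo/spdk
  # start the target with only two RPC methods allowed, as cmdline.sh does
  $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  tgt_pid=$!
  sleep 2   # crude stand-in for the waitforlisten helper the test uses
  $SPDK/scripts/rpc.py spdk_get_version     # allowed: prints the version object seen above
  $SPDK/scripts/rpc.py rpc_get_methods      # allowed: returns exactly the two whitelisted methods
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats \
    && echo 'unexpected: call should have been rejected' \
    || echo 'rejected as expected: Code=-32601 Method not found'
  kill "$tgt_pid"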
************************************ 00:06:26.687 15:52:20 -- common/autotest_common.sh@1142 -- # return 0 00:06:26.687 15:52:20 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:26.687 15:52:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.687 15:52:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.687 15:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:26.687 ************************************ 00:06:26.687 START TEST version 00:06:26.687 ************************************ 00:06:26.687 15:52:20 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:26.687 * Looking for test storage... 00:06:26.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.687 15:52:20 version -- app/version.sh@17 -- # get_header_version major 00:06:26.687 15:52:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.687 15:52:20 version -- app/version.sh@14 -- # cut -f2 00:06:26.687 15:52:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.687 15:52:20 version -- app/version.sh@17 -- # major=24 00:06:26.687 15:52:20 version -- app/version.sh@18 -- # get_header_version minor 00:06:26.687 15:52:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.687 15:52:20 version -- app/version.sh@14 -- # cut -f2 00:06:26.687 15:52:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.687 15:52:20 version -- app/version.sh@18 -- # minor=9 00:06:26.687 15:52:20 version -- app/version.sh@19 -- # get_header_version patch 00:06:26.687 15:52:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.687 15:52:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.687 15:52:20 version -- app/version.sh@14 -- # cut -f2 00:06:26.687 15:52:20 version -- app/version.sh@19 -- # patch=0 00:06:26.687 15:52:20 version -- app/version.sh@20 -- # get_header_version suffix 00:06:26.687 15:52:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.687 15:52:20 version -- app/version.sh@14 -- # cut -f2 00:06:26.687 15:52:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.687 15:52:20 version -- app/version.sh@20 -- # suffix=-pre 00:06:26.687 15:52:20 version -- app/version.sh@22 -- # version=24.9 00:06:26.687 15:52:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:26.687 15:52:20 version -- app/version.sh@28 -- # version=24.9rc0 00:06:26.687 15:52:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:26.687 15:52:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:26.945 15:52:20 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:26.945 15:52:20 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:26.945 ************************************ 00:06:26.945 END TEST version 00:06:26.945 ************************************ 00:06:26.945 00:06:26.945 real 0m0.158s 00:06:26.945 user 0m0.104s 00:06:26.945 sys 0m0.085s 00:06:26.945 15:52:20 version 
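For reference, the version test above is plain text extraction from version.h cross-checked against the Python package. A standalone sketch of the same pipeline (the bare `cut -f2` assumes the tab-separated #define layout of version.h that the test itself relies on):

  SPDK=/home/vagrant/spdk_repo/spdk
  hdr=$SPDK/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "version.h: ${major}.${minor}${suffix}"        # 24.9-pre in this run
  # cross-check against the Python bindings, as version.sh does (the -pre suffix maps to rc0 there)
  PYTHONPATH=$SPDK/python python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0 here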
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.945 15:52:20 version -- common/autotest_common.sh@10 -- # set +x 00:06:26.945 15:52:20 -- common/autotest_common.sh@1142 -- # return 0 00:06:26.945 15:52:20 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:26.945 15:52:20 -- spdk/autotest.sh@198 -- # uname -s 00:06:26.945 15:52:20 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:26.945 15:52:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:26.945 15:52:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:26.945 15:52:20 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:26.945 15:52:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:26.945 15:52:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:26.945 15:52:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:26.945 15:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:26.945 15:52:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:26.945 15:52:20 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:26.945 15:52:20 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:26.945 15:52:20 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:26.945 15:52:20 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:26.945 15:52:20 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:26.945 15:52:20 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:26.945 15:52:20 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:26.945 15:52:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.945 15:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:26.945 ************************************ 00:06:26.945 START TEST nvmf_tcp 00:06:26.945 ************************************ 00:06:26.945 15:52:20 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:26.945 * Looking for test storage... 00:06:26.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.945 15:52:20 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:26.945 15:52:20 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.945 15:52:20 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.945 15:52:20 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.945 15:52:20 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.945 15:52:20 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.946 15:52:20 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.946 15:52:20 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:26.946 15:52:20 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:26.946 15:52:20 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:26.946 15:52:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:26.946 15:52:20 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:26.946 15:52:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:26.946 15:52:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.946 15:52:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.946 ************************************ 00:06:26.946 START TEST nvmf_example 00:06:26.946 ************************************ 00:06:26.946 15:52:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:27.204 * Looking for test storage... 
00:06:27.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.204 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:27.205 15:52:20 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:27.205 Cannot find device "nvmf_init_br" 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:27.205 Cannot find device "nvmf_tgt_br" 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:27.205 Cannot find device "nvmf_tgt_br2" 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:27.205 Cannot find device "nvmf_init_br" 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:27.205 Cannot find device "nvmf_tgt_br" 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:27.205 Cannot find device 
"nvmf_tgt_br2" 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:27.205 Cannot find device "nvmf_br" 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:27.205 Cannot find device "nvmf_init_if" 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:27.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:27.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:27.205 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:27.463 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:27.463 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:27.463 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:27.463 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:27.463 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:27.463 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:27.463 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:27.463 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:27.463 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:27.463 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:27.463 15:52:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:27.463 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:27.463 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:27.463 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:27.463 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:27.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:27.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:06:27.464 00:06:27.464 --- 10.0.0.2 ping statistics --- 00:06:27.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.464 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:27.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:27.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:06:27.464 00:06:27.464 --- 10.0.0.3 ping statistics --- 00:06:27.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.464 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:27.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:27.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:06:27.464 00:06:27.464 --- 10.0.0.1 ping statistics --- 00:06:27.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.464 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=65140 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
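Everything from nvmf_veth_init above amounts to one small virtual topology that the rest of the tcp tests reuse: a network namespace for the target, veth pairs joined by a bridge, the 10.0.0.0/24 addresses, firewall openings for port 4420, and pings to prove connectivity before the example app is launched. A condensed sketch of the same steps, with interface names and addresses copied from this log (the second target interface nvmf_tgt_if2 / 10.0.0.3 follows the identical pattern and is omitted; run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # bridge the host-visible halves together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                            # host -> namespaced target, as verified above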
65140 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 65140 ']' 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.464 15:52:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- 
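Once the example target is listening on the RPC socket, the test configures it with a short RPC sequence: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, expose it under a subsystem NQN, and add a TCP listener on the namespaced address. The same sequence issued directly with rpc.py (rpc_cmd in the trace is autotest_common.sh's wrapper around the same script; all arguments are copied from the calls above):

  SPDK=/home/vagrant/spdk_repo/spdk
  rpc="$SPDK/scripts/rpc.py"
  # -t tcp -o comes from NVMF_TRANSPORT_OPTS as set earlier in this log; -u 8192 is the I/O unit size
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                  # 64 MiB, 512 B blocks -> "Malloc0" in this run
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # attach the bdev as namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the example target runs inside the netns, but its path-based /var/tmp/spdk.sock is still reachable from the host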
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:28.836 15:52:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:41.042 Initializing NVMe Controllers 00:06:41.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:41.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:41.042 Initialization complete. Launching workers. 00:06:41.042 ======================================================== 00:06:41.042 Latency(us) 00:06:41.042 Device Information : IOPS MiB/s Average min max 00:06:41.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14729.00 57.54 4347.04 873.48 23015.12 00:06:41.042 ======================================================== 00:06:41.042 Total : 14729.00 57.54 4347.04 873.48 23015.12 00:06:41.042 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:41.042 rmmod nvme_tcp 00:06:41.042 rmmod nvme_fabrics 00:06:41.042 rmmod nvme_keyring 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 65140 ']' 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 65140 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 65140 ']' 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 65140 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65140 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:41.042 killing process with pid 65140 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65140' 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 65140 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 65140 00:06:41.042 nvmf threads initialize successfully 00:06:41.042 bdev subsystem init successfully 
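The load generator in this run is spdk_nvme_perf pointed at the listener created just above; the transport ID string passed with -r mirrors the add_listener arguments. The same invocation with the flags spelled out (values taken from the command in the log):

  # -q 64: queue depth; -o 4096: 4 KiB I/Os; -w randrw -M 30: random mix, 30% reads; -t 10: run for 10 s
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

With 4 KiB I/Os, the 14729.00 IOPS reported above corresponds to the 57.54 MiB/s in the same row (14729 x 4096 bytes per second is roughly 57.5 MiB/s).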
00:06:41.042 created a nvmf target service 00:06:41.042 create targets's poll groups done 00:06:41.042 all subsystems of target started 00:06:41.042 nvmf target is running 00:06:41.042 all subsystems of target stopped 00:06:41.042 destroy targets's poll groups done 00:06:41.042 destroyed the nvmf target service 00:06:41.042 bdev subsystem finish successfully 00:06:41.042 nvmf threads destroy successfully 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:41.042 ************************************ 00:06:41.042 END TEST nvmf_example 00:06:41.042 ************************************ 00:06:41.042 00:06:41.042 real 0m12.342s 00:06:41.042 user 0m43.932s 00:06:41.042 sys 0m2.206s 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.042 15:52:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:41.042 15:52:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:41.042 15:52:33 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:41.042 15:52:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.042 15:52:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.042 15:52:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.042 ************************************ 00:06:41.042 START TEST nvmf_filesystem 00:06:41.042 ************************************ 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:41.042 * Looking for test storage... 
00:06:41.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:41.042 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:41.043 #define SPDK_CONFIG_H 00:06:41.043 #define SPDK_CONFIG_APPS 1 00:06:41.043 #define SPDK_CONFIG_ARCH native 00:06:41.043 #undef SPDK_CONFIG_ASAN 00:06:41.043 #define SPDK_CONFIG_AVAHI 1 00:06:41.043 #undef SPDK_CONFIG_CET 00:06:41.043 #define SPDK_CONFIG_COVERAGE 1 00:06:41.043 #define SPDK_CONFIG_CROSS_PREFIX 00:06:41.043 #undef SPDK_CONFIG_CRYPTO 00:06:41.043 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:41.043 #undef SPDK_CONFIG_CUSTOMOCF 00:06:41.043 #undef SPDK_CONFIG_DAOS 00:06:41.043 #define SPDK_CONFIG_DAOS_DIR 00:06:41.043 #define SPDK_CONFIG_DEBUG 1 00:06:41.043 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:41.043 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:41.043 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:41.043 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:41.043 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:41.043 #undef SPDK_CONFIG_DPDK_UADK 00:06:41.043 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:41.043 #define SPDK_CONFIG_EXAMPLES 1 00:06:41.043 #undef SPDK_CONFIG_FC 00:06:41.043 #define SPDK_CONFIG_FC_PATH 00:06:41.043 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:41.043 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:41.043 #undef SPDK_CONFIG_FUSE 00:06:41.043 #undef SPDK_CONFIG_FUZZER 00:06:41.043 #define SPDK_CONFIG_FUZZER_LIB 00:06:41.043 #define SPDK_CONFIG_GOLANG 1 00:06:41.043 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:41.043 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:41.043 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:41.043 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:41.043 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:41.043 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:41.043 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:41.043 #define SPDK_CONFIG_IDXD 1 00:06:41.043 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:41.043 #undef SPDK_CONFIG_IPSEC_MB 00:06:41.043 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:41.043 #define SPDK_CONFIG_ISAL 1 00:06:41.043 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:41.043 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:41.043 #define SPDK_CONFIG_LIBDIR 00:06:41.043 #undef SPDK_CONFIG_LTO 00:06:41.043 #define SPDK_CONFIG_MAX_LCORES 128 00:06:41.043 #define SPDK_CONFIG_NVME_CUSE 1 00:06:41.043 #undef SPDK_CONFIG_OCF 00:06:41.043 #define SPDK_CONFIG_OCF_PATH 00:06:41.043 #define SPDK_CONFIG_OPENSSL_PATH 00:06:41.043 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:41.043 #define SPDK_CONFIG_PGO_DIR 00:06:41.043 #undef SPDK_CONFIG_PGO_USE 00:06:41.043 #define SPDK_CONFIG_PREFIX /usr/local 00:06:41.043 #undef SPDK_CONFIG_RAID5F 00:06:41.043 #undef SPDK_CONFIG_RBD 00:06:41.043 #define SPDK_CONFIG_RDMA 1 00:06:41.043 #define SPDK_CONFIG_RDMA_PROV verbs 
00:06:41.043 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:41.043 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:41.043 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:41.043 #define SPDK_CONFIG_SHARED 1 00:06:41.043 #undef SPDK_CONFIG_SMA 00:06:41.043 #define SPDK_CONFIG_TESTS 1 00:06:41.043 #undef SPDK_CONFIG_TSAN 00:06:41.043 #define SPDK_CONFIG_UBLK 1 00:06:41.043 #define SPDK_CONFIG_UBSAN 1 00:06:41.043 #undef SPDK_CONFIG_UNIT_TESTS 00:06:41.043 #undef SPDK_CONFIG_URING 00:06:41.043 #define SPDK_CONFIG_URING_PATH 00:06:41.043 #undef SPDK_CONFIG_URING_ZNS 00:06:41.043 #define SPDK_CONFIG_USDT 1 00:06:41.043 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:41.043 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:41.043 #undef SPDK_CONFIG_VFIO_USER 00:06:41.043 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:41.043 #define SPDK_CONFIG_VHOST 1 00:06:41.043 #define SPDK_CONFIG_VIRTIO 1 00:06:41.043 #undef SPDK_CONFIG_VTUNE 00:06:41.043 #define SPDK_CONFIG_VTUNE_DIR 00:06:41.043 #define SPDK_CONFIG_WERROR 1 00:06:41.043 #define SPDK_CONFIG_WPDK_DIR 00:06:41.043 #undef SPDK_CONFIG_XNVME 00:06:41.043 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.043 15:52:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:41.044 15:52:33 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:41.044 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65381 ]] 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65381 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.GLy2wJ 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.GLy2wJ/tests/target /tmp/spdk.GLy2wJ 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264516608 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:06:41.045 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:06:41.046 
15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13786030080 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244149760 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13786030080 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244149760 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267756544 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=135168 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=93344407552 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6358372352 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:41.046 * Looking for test storage... 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13786030080 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:41.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@1682 -- # set -o errtrace 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.046 15:52:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:41.047 Cannot find device "nvmf_tgt_br" 00:06:41.047 15:52:33 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:41.047 Cannot find device "nvmf_tgt_br2" 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:41.047 Cannot find device "nvmf_tgt_br" 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:41.047 Cannot find device "nvmf_tgt_br2" 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:41.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:41.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:41.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:41.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:06:41.047 00:06:41.047 --- 10.0.0.2 ping statistics --- 00:06:41.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.047 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:41.047 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:41.047 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:06:41.047 00:06:41.047 --- 10.0.0.3 ping statistics --- 00:06:41.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.047 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:41.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:41.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:06:41.047 00:06:41.047 --- 10.0.0.1 ping statistics --- 00:06:41.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.047 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.047 ************************************ 00:06:41.047 START TEST nvmf_filesystem_no_in_capsule 00:06:41.047 ************************************ 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:41.047 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:41.048 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:41.048 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
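For readers reconstructing the environment by hand: the nvmf_veth_init sequence traced above reduces to roughly the following standalone sketch. The commands and names (nvmf_tgt_ns_spdk, nvmf_init_if, nvmf_br, the 10.0.0.x/24 addresses) are copied from the trace itself; the sketch assumes root privileges on a clean host and omits the harness's preliminary teardown of leftover interfaces, so it is a condensed reconstruction rather than the exact control flow of nvmf/common.sh.

  # create the target network namespace and the three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target-side ends into the namespace and assign addresses
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up, including loopback inside the namespace
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side veth ends together and open TCP port 4420
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check, mirroring the pings in the trace
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1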
00:06:41.048 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65543 00:06:41.048 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65543 00:06:41.048 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65543 ']' 00:06:41.048 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:41.048 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.048 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.048 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.048 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.048 15:52:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.048 [2024-07-15 15:52:33.661276] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:06:41.048 [2024-07-15 15:52:33.661388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.048 [2024-07-15 15:52:33.805598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.048 [2024-07-15 15:52:33.952352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:41.048 [2024-07-15 15:52:33.952405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:41.048 [2024-07-15 15:52:33.952419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:41.048 [2024-07-15 15:52:33.952430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:41.048 [2024-07-15 15:52:33.952440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
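At this point nvmfappstart has launched the target inside the namespace (pid 65543, core mask 0xF, all tracepoint groups enabled via -e 0xFFFF) and waitforlisten is polling until the RPC socket answers. Stripped of the helper plumbing, the startup amounts to roughly the following; the polling loop is a simplification of the real waitforlisten in common/autotest_common.sh, which retries up to max_retries=100 and does more liveness bookkeeping per iteration.

  # Simplified sketch of nvmfappstart + waitforlisten; binary path and socket match the trace.
  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      kill -0 "$nvmfpid" || exit 1                          # target died during startup
      if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; then
          break                                             # RPC server is up, tests can proceed
      fi
      sleep 0.5
  done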
00:06:41.048 [2024-07-15 15:52:33.952578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.048 [2024-07-15 15:52:33.952735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.048 [2024-07-15 15:52:33.953274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.048 [2024-07-15 15:52:33.953403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.048 [2024-07-15 15:52:34.722846] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.048 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.307 Malloc1 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.307 [2024-07-15 15:52:34.924682] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:41.307 { 00:06:41.307 "aliases": [ 00:06:41.307 "1671248d-70aa-45ba-9747-95dfbc141909" 00:06:41.307 ], 00:06:41.307 "assigned_rate_limits": { 00:06:41.307 "r_mbytes_per_sec": 0, 00:06:41.307 "rw_ios_per_sec": 0, 00:06:41.307 "rw_mbytes_per_sec": 0, 00:06:41.307 "w_mbytes_per_sec": 0 00:06:41.307 }, 00:06:41.307 "block_size": 512, 00:06:41.307 "claim_type": "exclusive_write", 00:06:41.307 "claimed": true, 00:06:41.307 "driver_specific": {}, 00:06:41.307 "memory_domains": [ 00:06:41.307 { 00:06:41.307 "dma_device_id": "system", 00:06:41.307 "dma_device_type": 1 00:06:41.307 }, 00:06:41.307 { 00:06:41.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.307 "dma_device_type": 2 00:06:41.307 } 00:06:41.307 ], 00:06:41.307 "name": "Malloc1", 00:06:41.307 "num_blocks": 1048576, 00:06:41.307 "product_name": "Malloc disk", 00:06:41.307 "supported_io_types": { 00:06:41.307 "abort": true, 00:06:41.307 "compare": false, 00:06:41.307 "compare_and_write": false, 00:06:41.307 "copy": true, 00:06:41.307 "flush": true, 00:06:41.307 "get_zone_info": false, 00:06:41.307 "nvme_admin": false, 00:06:41.307 "nvme_io": false, 00:06:41.307 "nvme_io_md": false, 00:06:41.307 "nvme_iov_md": false, 00:06:41.307 "read": true, 00:06:41.307 "reset": true, 00:06:41.307 "seek_data": false, 00:06:41.307 "seek_hole": false, 00:06:41.307 "unmap": true, 00:06:41.307 
"write": true, 00:06:41.307 "write_zeroes": true, 00:06:41.307 "zcopy": true, 00:06:41.307 "zone_append": false, 00:06:41.307 "zone_management": false 00:06:41.307 }, 00:06:41.307 "uuid": "1671248d-70aa-45ba-9747-95dfbc141909", 00:06:41.307 "zoned": false 00:06:41.307 } 00:06:41.307 ]' 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:41.307 15:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:41.565 15:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:41.565 15:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:41.565 15:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:41.565 15:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:41.565 15:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:41.565 15:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:41.565 15:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:41.565 15:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:41.565 15:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:41.565 15:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes 
nvme0n1 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:44.094 15:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.029 ************************************ 00:06:45.029 START TEST filesystem_ext4 00:06:45.029 ************************************ 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:45.029 15:52:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:45.029 mke2fs 1.46.5 (30-Dec-2021) 00:06:45.029 Discarding device blocks: 0/522240 done 00:06:45.029 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:45.029 Filesystem UUID: 8478fe0b-8f33-4a78-90b4-2bb78d2469cf 00:06:45.029 Superblock backups stored on blocks: 00:06:45.029 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:45.029 00:06:45.029 Allocating group tables: 0/64 done 00:06:45.029 Writing inode tables: 0/64 done 00:06:45.029 Creating journal (8192 blocks): done 00:06:45.029 Writing superblocks and filesystem accounting information: 0/64 done 00:06:45.029 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65543 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:45.029 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:45.288 ************************************ 00:06:45.288 END TEST filesystem_ext4 00:06:45.288 ************************************ 00:06:45.288 00:06:45.288 real 0m0.319s 00:06:45.288 user 0m0.028s 00:06:45.288 sys 0m0.053s 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:45.288 15:52:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.288 ************************************ 00:06:45.288 START TEST filesystem_btrfs 00:06:45.288 ************************************ 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:45.288 15:52:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:45.547 btrfs-progs v6.6.2 00:06:45.547 See https://btrfs.readthedocs.io for more information. 00:06:45.547 00:06:45.547 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:45.547 NOTE: several default settings have changed in version 5.15, please make sure 00:06:45.547 this does not affect your deployments: 00:06:45.547 - DUP for metadata (-m dup) 00:06:45.547 - enabled no-holes (-O no-holes) 00:06:45.547 - enabled free-space-tree (-R free-space-tree) 00:06:45.547 00:06:45.547 Label: (null) 00:06:45.547 UUID: dcf135d4-6d08-497d-940d-6c33f123e8a6 00:06:45.547 Node size: 16384 00:06:45.547 Sector size: 4096 00:06:45.547 Filesystem size: 510.00MiB 00:06:45.547 Block group profiles: 00:06:45.547 Data: single 8.00MiB 00:06:45.547 Metadata: DUP 32.00MiB 00:06:45.547 System: DUP 8.00MiB 00:06:45.547 SSD detected: yes 00:06:45.547 Zoned device: no 00:06:45.547 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:45.547 Runtime features: free-space-tree 00:06:45.547 Checksum: crc32c 00:06:45.547 Number of devices: 1 00:06:45.547 Devices: 00:06:45.547 ID SIZE PATH 00:06:45.547 1 510.00MiB /dev/nvme0n1p1 00:06:45.547 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65543 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:45.547 ************************************ 00:06:45.547 END TEST filesystem_btrfs 00:06:45.547 ************************************ 00:06:45.547 00:06:45.547 real 0m0.357s 00:06:45.547 user 0m0.025s 00:06:45.547 sys 0m0.062s 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.547 ************************************ 00:06:45.547 START TEST filesystem_xfs 00:06:45.547 ************************************ 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:45.547 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:45.806 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:45.806 = sectsz=512 attr=2, projid32bit=1 00:06:45.806 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:45.806 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:45.806 data = bsize=4096 blocks=130560, imaxpct=25 00:06:45.806 = sunit=0 swidth=0 blks 00:06:45.806 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:45.806 log =internal log bsize=4096 blocks=16384, version=2 00:06:45.806 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:45.806 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:46.373 Discarding blocks...Done. 
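Each filesystem_* subtest (ext4 and btrfs above, xfs here) exercises the same cycle from target/filesystem.sh: build the filesystem on the NVMe-oF-backed partition, mount it, create and delete a file with syncs in between, unmount, then verify the target process and the block devices are still healthy. Condensed into a sketch (the real helpers add retries around mkfs/umount and error handling that is omitted here):

  # Rough shape of nvmf_filesystem_create <fstype> <nvme_name> as traced above.
  fstype=$1; nvme_name=$2
  force=-f; [ "$fstype" = ext4 ] && force=-F                # mkfs.ext4 spells "force" as -F
  "mkfs.$fstype" "$force" "/dev/${nvme_name}p1"
  mount "/dev/${nvme_name}p1" /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                                        # nvmf_tgt must have survived the I/O
  lsblk -l -o NAME | grep -q -w "$nvme_name"                # namespace still visible...
  lsblk -l -o NAME | grep -q -w "${nvme_name}p1"            # ...and so is the test partition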
00:06:46.373 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:46.373 15:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65543 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:48.903 ************************************ 00:06:48.903 END TEST filesystem_xfs 00:06:48.903 ************************************ 00:06:48.903 00:06:48.903 real 0m3.164s 00:06:48.903 user 0m0.018s 00:06:48.903 sys 0m0.060s 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:48.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:48.903 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:48.903 15:52:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65543 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65543 ']' 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65543 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65543 00:06:48.904 killing process with pid 65543 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65543' 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65543 00:06:48.904 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65543 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:49.470 00:06:49.470 real 0m9.357s 00:06:49.470 user 0m35.257s 00:06:49.470 sys 0m1.523s 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.470 ************************************ 00:06:49.470 END TEST nvmf_filesystem_no_in_capsule 00:06:49.470 ************************************ 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
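With the END TEST marker above, the zero in-capsule pass is fully torn down before the 4096-byte variant begins: the test partition is removed, the initiator disconnects, the subsystem is deleted over RPC, and the target is killed. Reduced to its essentials (a sketch; rpc_cmd in the trace resolves to scripts/rpc.py against the target's RPC socket, and killprocess does more bookkeeping than the bare kill/wait shown here):

  # Sketch of the teardown traced above; error handling omitted.
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1            # drop the SPDK_TEST partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                        # stop nvmf_tgt (pid 65543 in this run)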
00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 ************************************ 00:06:49.470 START TEST nvmf_filesystem_in_capsule 00:06:49.470 ************************************ 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.470 15:52:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 15:52:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65854 00:06:49.470 15:52:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:49.470 15:52:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65854 00:06:49.470 15:52:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65854 ']' 00:06:49.470 15:52:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.470 15:52:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.470 15:52:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.470 15:52:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.470 15:52:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 [2024-07-15 15:52:43.076788] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:06:49.470 [2024-07-15 15:52:43.076902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.727 [2024-07-15 15:52:43.219689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.727 [2024-07-15 15:52:43.337252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:49.727 [2024-07-15 15:52:43.337325] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:49.727 [2024-07-15 15:52:43.337338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.727 [2024-07-15 15:52:43.337347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
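This second pass differs from the first only in the transport configuration: nvmf_filesystem_part 4096 sets in_capsule=4096, so the TCP transport created below uses -c 4096 (4 KiB of in-capsule data, letting small writes travel inside the command capsule) instead of -c 0, while the bdev, subsystem and listener setup is identical. For reference, the RPC sequence the trace is about to replay looks roughly like this, with rpc.py standing in for the rpc_cmd wrapper (the trace's nvme connect additionally passes --hostnqn/--hostid):

  # Provisioning sketch; -c sets the in-capsule data size: 0 in the first pass, 4096 here.
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096
  $RPC bdev_malloc_create 512 512 -b Malloc1                # 512 MiB malloc bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420    # initiator side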
00:06:49.727 [2024-07-15 15:52:43.337355] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:49.727 [2024-07-15 15:52:43.337510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.727 [2024-07-15 15:52:43.337615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.727 [2024-07-15 15:52:43.337697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.727 [2024-07-15 15:52:43.337698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.659 [2024-07-15 15:52:44.112610] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.659 Malloc1 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.659 15:52:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.659 [2024-07-15 15:52:44.299998] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.659 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:50.659 { 00:06:50.659 "aliases": [ 00:06:50.659 "ff81303f-bf27-4fb0-82a2-fa978b06103c" 00:06:50.659 ], 00:06:50.660 "assigned_rate_limits": { 00:06:50.660 "r_mbytes_per_sec": 0, 00:06:50.660 "rw_ios_per_sec": 0, 00:06:50.660 "rw_mbytes_per_sec": 0, 00:06:50.660 "w_mbytes_per_sec": 0 00:06:50.660 }, 00:06:50.660 "block_size": 512, 00:06:50.660 "claim_type": "exclusive_write", 00:06:50.660 "claimed": true, 00:06:50.660 "driver_specific": {}, 00:06:50.660 "memory_domains": [ 00:06:50.660 { 00:06:50.660 "dma_device_id": "system", 00:06:50.660 "dma_device_type": 1 00:06:50.660 }, 00:06:50.660 { 00:06:50.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.660 "dma_device_type": 2 00:06:50.660 } 00:06:50.660 ], 00:06:50.660 "name": "Malloc1", 00:06:50.660 "num_blocks": 1048576, 00:06:50.660 "product_name": "Malloc disk", 00:06:50.660 "supported_io_types": { 00:06:50.660 "abort": true, 00:06:50.660 "compare": false, 00:06:50.660 "compare_and_write": false, 00:06:50.660 "copy": true, 00:06:50.660 "flush": true, 00:06:50.660 "get_zone_info": false, 00:06:50.660 "nvme_admin": false, 00:06:50.660 "nvme_io": false, 00:06:50.660 "nvme_io_md": false, 00:06:50.660 "nvme_iov_md": false, 00:06:50.660 "read": true, 00:06:50.660 "reset": true, 00:06:50.660 "seek_data": false, 00:06:50.660 "seek_hole": false, 00:06:50.660 "unmap": true, 
00:06:50.660 "write": true, 00:06:50.660 "write_zeroes": true, 00:06:50.660 "zcopy": true, 00:06:50.660 "zone_append": false, 00:06:50.660 "zone_management": false 00:06:50.660 }, 00:06:50.660 "uuid": "ff81303f-bf27-4fb0-82a2-fa978b06103c", 00:06:50.660 "zoned": false 00:06:50.660 } 00:06:50.660 ]' 00:06:50.660 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:50.660 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:50.660 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:50.918 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:50.918 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:50.918 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:50.918 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:50.918 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:50.918 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:50.918 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:50.918 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:50.918 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:50.918 15:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:53.448 15:52:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:53.448 15:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:54.014 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:54.014 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:54.014 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:54.014 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.014 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.014 ************************************ 00:06:54.014 START TEST filesystem_in_capsule_ext4 00:06:54.014 ************************************ 00:06:54.014 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:54.014 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:54.014 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:54.014 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:54.273 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:54.273 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:54.273 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:54.273 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:54.273 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:54.273 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:54.273 15:52:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:54.273 mke2fs 1.46.5 (30-Dec-2021) 00:06:54.273 Discarding device blocks: 0/522240 done 00:06:54.273 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:54.273 Filesystem UUID: f27374b3-4dee-4a61-a42e-d3d429b90ee2 00:06:54.273 Superblock backups stored on blocks: 00:06:54.273 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:54.273 00:06:54.273 Allocating group tables: 0/64 done 00:06:54.273 Writing inode tables: 0/64 done 00:06:54.273 Creating journal (8192 blocks): done 00:06:54.273 Writing superblocks and filesystem accounting information: 0/64 done 00:06:54.273 00:06:54.273 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:54.273 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:54.273 15:52:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65854 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:54.531 00:06:54.531 real 0m0.336s 00:06:54.531 user 0m0.019s 00:06:54.531 sys 0m0.053s 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:54.531 ************************************ 00:06:54.531 END TEST filesystem_in_capsule_ext4 00:06:54.531 ************************************ 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:54.531 15:52:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.531 ************************************ 00:06:54.531 START TEST filesystem_in_capsule_btrfs 00:06:54.531 ************************************ 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:54.531 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:54.789 btrfs-progs v6.6.2 00:06:54.789 See https://btrfs.readthedocs.io for more information. 00:06:54.789 00:06:54.789 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:54.789 NOTE: several default settings have changed in version 5.15, please make sure 00:06:54.789 this does not affect your deployments: 00:06:54.789 - DUP for metadata (-m dup) 00:06:54.789 - enabled no-holes (-O no-holes) 00:06:54.789 - enabled free-space-tree (-R free-space-tree) 00:06:54.789 00:06:54.789 Label: (null) 00:06:54.789 UUID: 99ba19d8-d1ca-4208-8017-3ec094011d74 00:06:54.789 Node size: 16384 00:06:54.789 Sector size: 4096 00:06:54.789 Filesystem size: 510.00MiB 00:06:54.789 Block group profiles: 00:06:54.789 Data: single 8.00MiB 00:06:54.789 Metadata: DUP 32.00MiB 00:06:54.789 System: DUP 8.00MiB 00:06:54.789 SSD detected: yes 00:06:54.789 Zoned device: no 00:06:54.789 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:54.789 Runtime features: free-space-tree 00:06:54.789 Checksum: crc32c 00:06:54.789 Number of devices: 1 00:06:54.789 Devices: 00:06:54.789 ID SIZE PATH 00:06:54.789 1 510.00MiB /dev/nvme0n1p1 00:06:54.789 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65854 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:54.789 00:06:54.789 real 0m0.268s 00:06:54.789 user 0m0.018s 00:06:54.789 sys 0m0.073s 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:54.789 ************************************ 00:06:54.789 END TEST filesystem_in_capsule_btrfs 00:06:54.789 ************************************ 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:06:54.789 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.790 ************************************ 00:06:54.790 START TEST filesystem_in_capsule_xfs 00:06:54.790 ************************************ 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:54.790 15:52:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:55.048 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:55.048 = sectsz=512 attr=2, projid32bit=1 00:06:55.048 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:55.048 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:55.048 data = bsize=4096 blocks=130560, imaxpct=25 00:06:55.048 = sunit=0 swidth=0 blks 00:06:55.048 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:55.048 log =internal log bsize=4096 blocks=16384, version=2 00:06:55.048 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:55.048 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:55.614 Discarding blocks...Done. 
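For reference, the ext4, btrfs and xfs cases above all run the same create-and-verify flow from target/filesystem.sh: format the partition carved out of the exported namespace, mount it, write and delete a file, unmount, and confirm with lsblk that the device and its partition are still visible. A condensed stand-alone sketch of that flow, using only commands visible in the trace (the function name is illustrative, not the exact SPDK helper):

# Sketch of the per-filesystem smoke test traced in the log above.
fs_smoke_test() {
    local fstype=$1                  # ext4 | btrfs | xfs
    local dev=/dev/nvme0n1p1         # partition on the NVMe-oF namespace
    local mnt=/mnt/device

    case "$fstype" in
        ext4)  mkfs.ext4 -F "$dev" ;;    # autotest_common.sh@935
        btrfs) mkfs.btrfs -f "$dev" ;;
        xfs)   mkfs.xfs -f "$dev" ;;
    esac

    mount "$dev" "$mnt"              # filesystem.sh@23
    touch "$mnt/aaa"                 # write a file across the fabric
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"                    # filesystem.sh@30

    # The namespace and its partition should still be listed afterwards.
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1
}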
00:06:55.614 15:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:55.614 15:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:57.519 15:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:57.519 15:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:57.519 15:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:57.519 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:57.519 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:57.519 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:57.519 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65854 00:06:57.519 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:57.519 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:57.519 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:57.519 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:57.519 ************************************ 00:06:57.519 END TEST filesystem_in_capsule_xfs 00:06:57.519 ************************************ 00:06:57.519 00:06:57.519 real 0m2.601s 00:06:57.519 user 0m0.024s 00:06:57.519 sys 0m0.055s 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:57.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:57.520 15:52:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65854 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65854 ']' 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65854 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65854 00:06:57.520 killing process with pid 65854 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65854' 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65854 00:06:57.520 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65854 00:06:58.085 ************************************ 00:06:58.085 END TEST nvmf_filesystem_in_capsule 00:06:58.085 ************************************ 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:58.085 00:06:58.085 real 0m8.641s 00:06:58.085 user 0m32.459s 00:06:58.085 sys 0m1.579s 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:58.085 rmmod nvme_tcp 00:06:58.085 rmmod nvme_fabrics 00:06:58.085 rmmod nvme_keyring 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:58.085 ************************************ 00:06:58.085 END TEST nvmf_filesystem 00:06:58.085 ************************************ 00:06:58.085 00:06:58.085 real 0m18.769s 00:06:58.085 user 1m7.937s 00:06:58.085 sys 0m3.468s 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.085 15:52:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.344 15:52:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:58.344 15:52:51 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:58.344 15:52:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:58.344 15:52:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.344 15:52:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.344 ************************************ 00:06:58.344 START TEST nvmf_target_discovery 00:06:58.344 ************************************ 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:58.344 * Looking for test storage... 
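Once the xfs case passes, the in-capsule run above tears everything down before the discovery suite starts: the test partition is removed, the initiator disconnects from cnode1, the subsystem is deleted over RPC, the target process is stopped, and the kernel NVMe modules are unloaded. A condensed sketch of that teardown using only commands that appear in the trace (rpc_cmd is the harness helper; scripts/rpc.py is shown as an equivalent direct call, and 65854 is the nvmf_tgt pid from this run):

# Teardown mirroring target/filesystem.sh@91-101 and nvmftestfini above.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the test partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # detach the initiator
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 65854 && wait 65854                               # stop the nvmf_tgt instance (killprocess)
modprobe -v -r nvme-tcp                                # log shows nvme_tcp, nvme_fabrics, nvme_keyring removed
modprobe -v -r nvme-fabrics                            # second pass from nvmf/common.sh@123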
00:06:58.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:58.344 15:52:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:58.344 Cannot find device "nvmf_tgt_br" 00:06:58.344 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:06:58.344 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:58.344 Cannot find device "nvmf_tgt_br2" 00:06:58.344 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:06:58.344 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:58.344 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:58.344 Cannot find device "nvmf_tgt_br" 00:06:58.344 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:06:58.344 15:52:52 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:58.344 Cannot find device "nvmf_tgt_br2" 00:06:58.344 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:06:58.344 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:58.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:58.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:58.603 15:52:52 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:58.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:06:58.603 00:06:58.603 --- 10.0.0.2 ping statistics --- 00:06:58.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.603 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:58.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:58.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:06:58.603 00:06:58.603 --- 10.0.0.3 ping statistics --- 00:06:58.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.603 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:06:58.603 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:58.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:06:58.604 00:06:58.604 --- 10.0.0.1 ping statistics --- 00:06:58.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.604 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:06:58.604 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.604 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:06:58.604 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:58.604 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.604 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:58.604 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:58.604 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.604 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:58.604 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:58.861 15:52:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:58.861 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:58.861 15:52:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:58.861 15:52:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.861 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66313 00:06:58.861 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66313 00:06:58.861 15:52:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66313 ']' 00:06:58.861 15:52:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:58.861 15:52:52 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.862 15:52:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.862 15:52:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.862 15:52:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.862 15:52:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.862 [2024-07-15 15:52:52.409174] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:06:58.862 [2024-07-15 15:52:52.410060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.862 [2024-07-15 15:52:52.550947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.119 [2024-07-15 15:52:52.687097] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.119 [2024-07-15 15:52:52.687397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.120 [2024-07-15 15:52:52.687626] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.120 [2024-07-15 15:52:52.687770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:59.120 [2024-07-15 15:52:52.687814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
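The target for the discovery test is launched inside the nvmf_tgt_ns_spdk namespace created above, and once its RPC socket is up the script configures a TCP transport plus four null-bdev subsystems, each listening on 10.0.0.2:4420, followed by a discovery referral on port 4430. A condensed sketch of that sequence, reconstructed from the rpc_cmd trace that follows (the direct scripts/rpc.py invocation and the readiness poll are stand-ins for the harness helpers, not the literal script text):

# Start nvmf_tgt in the test namespace with the flags recorded above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # illustrative stand-in for waitforlisten

$rpc nvmf_create_transport -t tcp -o -u 8192                     # discovery.sh@23

for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512                    # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery.sh@32
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # discovery.sh@35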
00:06:59.120 [2024-07-15 15:52:52.688086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.120 [2024-07-15 15:52:52.688249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.120 [2024-07-15 15:52:52.688394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.120 [2024-07-15 15:52:52.688399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.686 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.686 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:59.686 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:59.686 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:59.686 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 [2024-07-15 15:52:53.436930] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 Null1 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:06:59.945 [2024-07-15 15:52:53.502500] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 Null2 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 Null3 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 Null4 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.945 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.945 
15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -a 10.0.0.2 -s 4420 00:07:00.204 00:07:00.204 Discovery Log Number of Records 6, Generation counter 6 00:07:00.204 =====Discovery Log Entry 0====== 00:07:00.204 trtype: tcp 00:07:00.204 adrfam: ipv4 00:07:00.204 subtype: current discovery subsystem 00:07:00.204 treq: not required 00:07:00.204 portid: 0 00:07:00.204 trsvcid: 4420 00:07:00.204 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:00.204 traddr: 10.0.0.2 00:07:00.204 eflags: explicit discovery connections, duplicate discovery information 00:07:00.204 sectype: none 00:07:00.204 =====Discovery Log Entry 1====== 00:07:00.204 trtype: tcp 00:07:00.204 adrfam: ipv4 00:07:00.204 subtype: nvme subsystem 00:07:00.204 treq: not required 00:07:00.204 portid: 0 00:07:00.204 trsvcid: 4420 00:07:00.204 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:00.204 traddr: 10.0.0.2 00:07:00.204 eflags: none 00:07:00.204 sectype: none 00:07:00.204 =====Discovery Log Entry 2====== 00:07:00.204 trtype: tcp 00:07:00.204 adrfam: ipv4 00:07:00.204 subtype: nvme subsystem 00:07:00.204 treq: not required 00:07:00.204 portid: 0 00:07:00.204 trsvcid: 4420 00:07:00.204 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:00.204 traddr: 10.0.0.2 00:07:00.204 eflags: none 00:07:00.204 sectype: none 00:07:00.204 =====Discovery Log Entry 3====== 00:07:00.204 trtype: tcp 00:07:00.204 adrfam: ipv4 00:07:00.204 subtype: nvme subsystem 00:07:00.204 treq: not required 00:07:00.204 portid: 0 00:07:00.204 trsvcid: 4420 00:07:00.204 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:00.204 traddr: 10.0.0.2 00:07:00.204 eflags: none 00:07:00.204 sectype: none 00:07:00.204 =====Discovery Log Entry 4====== 00:07:00.204 trtype: tcp 00:07:00.204 adrfam: ipv4 00:07:00.204 subtype: nvme subsystem 00:07:00.204 treq: not required 00:07:00.204 portid: 0 00:07:00.204 trsvcid: 4420 00:07:00.204 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:00.204 traddr: 10.0.0.2 00:07:00.204 eflags: none 00:07:00.204 sectype: none 00:07:00.204 =====Discovery Log Entry 5====== 00:07:00.204 trtype: tcp 00:07:00.204 adrfam: ipv4 00:07:00.204 subtype: discovery subsystem referral 00:07:00.204 treq: not required 00:07:00.204 portid: 0 00:07:00.204 trsvcid: 4430 00:07:00.204 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:00.204 traddr: 10.0.0.2 00:07:00.204 eflags: none 00:07:00.204 sectype: none 00:07:00.204 Perform nvmf subsystem discovery via RPC 00:07:00.204 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:00.204 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:00.204 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.205 [ 00:07:00.205 { 00:07:00.205 "allow_any_host": true, 00:07:00.205 "hosts": [], 00:07:00.205 "listen_addresses": [ 00:07:00.205 { 00:07:00.205 "adrfam": "IPv4", 00:07:00.205 "traddr": "10.0.0.2", 00:07:00.205 "trsvcid": "4420", 00:07:00.205 "trtype": "TCP" 00:07:00.205 } 00:07:00.205 ], 00:07:00.205 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:00.205 "subtype": "Discovery" 00:07:00.205 }, 00:07:00.205 { 00:07:00.205 "allow_any_host": true, 00:07:00.205 "hosts": [], 00:07:00.205 "listen_addresses": [ 00:07:00.205 { 
00:07:00.205 "adrfam": "IPv4", 00:07:00.205 "traddr": "10.0.0.2", 00:07:00.205 "trsvcid": "4420", 00:07:00.205 "trtype": "TCP" 00:07:00.205 } 00:07:00.205 ], 00:07:00.205 "max_cntlid": 65519, 00:07:00.205 "max_namespaces": 32, 00:07:00.205 "min_cntlid": 1, 00:07:00.205 "model_number": "SPDK bdev Controller", 00:07:00.205 "namespaces": [ 00:07:00.205 { 00:07:00.205 "bdev_name": "Null1", 00:07:00.205 "name": "Null1", 00:07:00.205 "nguid": "F1E2B1AF4ED944D78A2996AEF1A50333", 00:07:00.205 "nsid": 1, 00:07:00.205 "uuid": "f1e2b1af-4ed9-44d7-8a29-96aef1a50333" 00:07:00.205 } 00:07:00.205 ], 00:07:00.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:00.205 "serial_number": "SPDK00000000000001", 00:07:00.205 "subtype": "NVMe" 00:07:00.205 }, 00:07:00.205 { 00:07:00.205 "allow_any_host": true, 00:07:00.205 "hosts": [], 00:07:00.205 "listen_addresses": [ 00:07:00.205 { 00:07:00.205 "adrfam": "IPv4", 00:07:00.205 "traddr": "10.0.0.2", 00:07:00.205 "trsvcid": "4420", 00:07:00.205 "trtype": "TCP" 00:07:00.205 } 00:07:00.205 ], 00:07:00.205 "max_cntlid": 65519, 00:07:00.205 "max_namespaces": 32, 00:07:00.205 "min_cntlid": 1, 00:07:00.205 "model_number": "SPDK bdev Controller", 00:07:00.205 "namespaces": [ 00:07:00.205 { 00:07:00.205 "bdev_name": "Null2", 00:07:00.205 "name": "Null2", 00:07:00.205 "nguid": "DE818CDD04994AD6B52F448B7E33A685", 00:07:00.205 "nsid": 1, 00:07:00.205 "uuid": "de818cdd-0499-4ad6-b52f-448b7e33a685" 00:07:00.205 } 00:07:00.205 ], 00:07:00.205 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:00.205 "serial_number": "SPDK00000000000002", 00:07:00.205 "subtype": "NVMe" 00:07:00.205 }, 00:07:00.205 { 00:07:00.205 "allow_any_host": true, 00:07:00.205 "hosts": [], 00:07:00.205 "listen_addresses": [ 00:07:00.205 { 00:07:00.205 "adrfam": "IPv4", 00:07:00.205 "traddr": "10.0.0.2", 00:07:00.205 "trsvcid": "4420", 00:07:00.205 "trtype": "TCP" 00:07:00.205 } 00:07:00.205 ], 00:07:00.205 "max_cntlid": 65519, 00:07:00.205 "max_namespaces": 32, 00:07:00.205 "min_cntlid": 1, 00:07:00.205 "model_number": "SPDK bdev Controller", 00:07:00.205 "namespaces": [ 00:07:00.205 { 00:07:00.205 "bdev_name": "Null3", 00:07:00.205 "name": "Null3", 00:07:00.205 "nguid": "EE08C113BEB44001BC4B08BE956597EF", 00:07:00.205 "nsid": 1, 00:07:00.205 "uuid": "ee08c113-beb4-4001-bc4b-08be956597ef" 00:07:00.205 } 00:07:00.205 ], 00:07:00.205 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:00.205 "serial_number": "SPDK00000000000003", 00:07:00.205 "subtype": "NVMe" 00:07:00.205 }, 00:07:00.205 { 00:07:00.205 "allow_any_host": true, 00:07:00.205 "hosts": [], 00:07:00.205 "listen_addresses": [ 00:07:00.205 { 00:07:00.205 "adrfam": "IPv4", 00:07:00.205 "traddr": "10.0.0.2", 00:07:00.205 "trsvcid": "4420", 00:07:00.205 "trtype": "TCP" 00:07:00.205 } 00:07:00.205 ], 00:07:00.205 "max_cntlid": 65519, 00:07:00.205 "max_namespaces": 32, 00:07:00.205 "min_cntlid": 1, 00:07:00.205 "model_number": "SPDK bdev Controller", 00:07:00.205 "namespaces": [ 00:07:00.205 { 00:07:00.205 "bdev_name": "Null4", 00:07:00.205 "name": "Null4", 00:07:00.205 "nguid": "D65A3AB6FB084C7CB313101161342D0D", 00:07:00.205 "nsid": 1, 00:07:00.205 "uuid": "d65a3ab6-fb08-4c7c-b313-101161342d0d" 00:07:00.205 } 00:07:00.205 ], 00:07:00.205 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:00.205 "serial_number": "SPDK00000000000004", 00:07:00.205 "subtype": "NVMe" 00:07:00.205 } 00:07:00.205 ] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:00.205 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:00.205 rmmod nvme_tcp 00:07:00.464 rmmod nvme_fabrics 00:07:00.464 rmmod nvme_keyring 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66313 ']' 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66313 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66313 ']' 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66313 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66313 00:07:00.464 killing process with pid 66313 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- 
# process_name=reactor_0 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66313' 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66313 00:07:00.464 15:52:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66313 00:07:00.723 15:52:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:00.723 15:52:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:00.723 15:52:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:00.723 15:52:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:00.723 15:52:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:00.723 15:52:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.723 15:52:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.723 15:52:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.723 15:52:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:00.723 00:07:00.723 real 0m2.415s 00:07:00.723 user 0m6.472s 00:07:00.723 sys 0m0.625s 00:07:00.723 15:52:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.723 15:52:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.723 ************************************ 00:07:00.723 END TEST nvmf_target_discovery 00:07:00.723 ************************************ 00:07:00.723 15:52:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:00.723 15:52:54 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:00.723 15:52:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:00.723 15:52:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.723 15:52:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.723 ************************************ 00:07:00.723 START TEST nvmf_referrals 00:07:00.723 ************************************ 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:00.723 * Looking for test storage... 
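For reference, the target_discovery teardown traced just above follows this pattern; a minimal sketch using direct rpc.py calls (the test itself goes through the rpc_cmd wrapper, and the scripts/rpc.py path here is illustrative):
for i in $(seq 1 4); do
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"    # drop the subsystem first
    scripts/rpc.py bdev_null_delete "Null${i}"                              # then its backing null bdev
done
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430    # the referral removal seen at target/discovery.sh@47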
00:07:00.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:00.723 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:00.724 Cannot find device "nvmf_tgt_br" 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:07:00.724 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:00.982 Cannot find device "nvmf_tgt_br2" 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:00.982 Cannot find device "nvmf_tgt_br" 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:00.982 Cannot find device "nvmf_tgt_br2" 
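The nvmf_veth_init calls that follow build the test topology; a condensed sketch using the interface names and addresses shown in the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is set up the same way and omitted here; links are then brought up and pinged to verify, as the output below shows):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # bridge the two halves together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT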
00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:00.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:00.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:00.982 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:00.983 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:00.983 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:00.983 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:00.983 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:00.983 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:00.983 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:00.983 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:00.983 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:00.983 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:00.983 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:01.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:01.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:07:01.241 00:07:01.241 --- 10.0.0.2 ping statistics --- 00:07:01.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.241 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:01.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:01.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:07:01.241 00:07:01.241 --- 10.0.0.3 ping statistics --- 00:07:01.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.241 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:01.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:01.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:01.241 00:07:01.241 --- 10.0.0.1 ping statistics --- 00:07:01.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.241 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66541 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66541 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66541 ']' 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:01.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
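nvmfappstart launches the target inside the namespace with the command shown just above and then waits for its RPC socket; a sketch of that wait, using a hypothetical poll loop in place of the waitforlisten helper (rpc.py path illustrative):
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# stand-in for waitforlisten: poll until /var/tmp/spdk.sock answers RPCs
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done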
00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.241 15:52:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.241 [2024-07-15 15:52:54.812519] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:07:01.241 [2024-07-15 15:52:54.812646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.241 [2024-07-15 15:52:54.950672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.499 [2024-07-15 15:52:55.076510] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.499 [2024-07-15 15:52:55.076795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.499 [2024-07-15 15:52:55.077259] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.499 [2024-07-15 15:52:55.077558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.499 [2024-07-15 15:52:55.077755] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.499 [2024-07-15 15:52:55.078204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.499 [2024-07-15 15:52:55.078475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.499 [2024-07-15 15:52:55.078277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.499 [2024-07-15 15:52:55.078466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.434 [2024-07-15 15:52:55.908974] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.434 [2024-07-15 15:52:55.935437] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:02.434 15:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:02.434 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 
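The checks above repeat one verification pattern: referrals are added over RPC, then read back both from the target (nvmf_discovery_get_referrals) and from a host-side discovery log page (nvme discover). A minimal sketch, with the rpc.py path illustrative and the hostnqn/hostid options dropped for brevity; the jq filters are the ones used in the trace:
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'      # target-side view
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'   # host-side view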
00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:02.693 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.951 15:52:56 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:02.951 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.209 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:03.209 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:03.209 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:03.209 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:03.210 
15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:03.210 15:52:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.468 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:03.468 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:03.468 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:03.468 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:03.468 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:03.468 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current 
discovery subsystem").traddr' 00:07:03.468 15:52:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:03.468 rmmod nvme_tcp 00:07:03.468 rmmod nvme_fabrics 00:07:03.468 rmmod nvme_keyring 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66541 ']' 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66541 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66541 ']' 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66541 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66541 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.468 killing process with pid 66541 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66541' 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66541 00:07:03.468 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66541 00:07:03.726 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.726 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.726 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.726 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.726 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.726 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.726 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.726 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.726 15:52:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:03.726 
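nvmftestfini above tears everything back down; roughly the following, with remove_spdk_ns assumed to amount to deleting the namespace created earlier:
modprobe -v -r nvme-tcp              # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"               # killprocess 66541
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null     # assumed equivalent of remove_spdk_ns
ip -4 addr flush nvmf_init_if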
00:07:03.726 real 0m3.118s 00:07:03.726 user 0m10.168s 00:07:03.726 sys 0m0.866s 00:07:03.726 ************************************ 00:07:03.726 END TEST nvmf_referrals 00:07:03.726 ************************************ 00:07:03.726 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.726 15:52:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:03.985 15:52:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:03.985 15:52:57 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:03.985 15:52:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:03.985 15:52:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.985 15:52:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:03.985 ************************************ 00:07:03.985 START TEST nvmf_connect_disconnect 00:07:03.985 ************************************ 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:03.985 * Looking for test storage... 00:07:03.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:03.985 15:52:57 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br 
nomaster 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:03.985 Cannot find device "nvmf_tgt_br" 00:07:03.985 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:03.986 Cannot find device "nvmf_tgt_br2" 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:03.986 Cannot find device "nvmf_tgt_br" 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:03.986 Cannot find device "nvmf_tgt_br2" 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:03.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:03.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:07:03.986 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:04.244 15:52:57 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:04.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:07:04.244 00:07:04.244 --- 10.0.0.2 ping statistics --- 00:07:04.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.244 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:04.244 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:04.244 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:07:04.244 00:07:04.244 --- 10.0.0.3 ping statistics --- 00:07:04.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.244 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:04.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:04.244 00:07:04.244 --- 10.0.0.1 ping statistics --- 00:07:04.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.244 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66844 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66844 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66844 ']' 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.244 15:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:04.503 [2024-07-15 15:52:58.001462] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:07:04.503 [2024-07-15 15:52:58.001573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.503 [2024-07-15 15:52:58.139046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.761 [2024-07-15 15:52:58.275784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
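The nvmf_veth_init steps above build the test network before the target comes up: a dedicated namespace for the target side, veth pairs whose host ends are enslaved to a bridge, an iptables rule admitting NVMe/TCP on port 4420, and a ping in each direction to confirm reachability. Condensed into a shell sketch (same commands, names, and addresses as logged, grouped slightly for readability):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # 10.0.0.1 = initiator side, 10.0.0.2 / 10.0.0.3 = target listeners inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP (port 4420) in, plus bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages a little further up come from the harness tearing down leftovers of a previous run (each followed by a true) before rebuilding this topology; they are expected, not failures.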
00:07:04.761 [2024-07-15 15:52:58.275878] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.761 [2024-07-15 15:52:58.275906] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.761 [2024-07-15 15:52:58.275917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.761 [2024-07-15 15:52:58.275927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.761 [2024-07-15 15:52:58.276119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.761 [2024-07-15 15:52:58.276801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.761 [2024-07-15 15:52:58.276992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.761 [2024-07-15 15:52:58.276999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.326 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.326 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:05.326 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:05.326 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:05.326 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.584 [2024-07-15 15:52:59.066491] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
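With the target running inside the namespace, connect_disconnect.sh provisions a test subsystem over the RPC socket. The rpc_cmd calls logged here (plus the listener added in the entries that follow) are roughly equivalent to driving scripts/rpc.py by hand against the default /var/tmp/spdk.sock socket seen above; a sketch, not the harness's literal plumbing:

  # TCP transport with the flags from the script (-o -u 8192 -c 0)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # malloc bdev sized by MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE (64 / 512); the call returns Malloc0
  scripts/rpc.py bdev_malloc_create 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # added in the next entries: NVMe/TCP listener on the namespaced target address
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With num_iterations=5 the test then connects an initiator to this subsystem and disconnects it five times, which is what produces the five "disconnected 1 controller(s)" lines further down.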
00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.584 [2024-07-15 15:52:59.142368] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:05.584 15:52:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:08.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.994 15:53:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:16.994 15:53:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:16.994 15:53:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:16.994 15:53:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:17.560 rmmod nvme_tcp 00:07:17.560 rmmod nvme_fabrics 00:07:17.560 rmmod nvme_keyring 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66844 ']' 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66844 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66844 ']' 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66844 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66844 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.560 killing process with pid 66844 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66844' 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66844 00:07:17.560 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66844 00:07:17.818 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:17.818 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:17.818 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:17.818 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:17.818 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:17.818 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.818 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.819 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.819 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:17.819 00:07:17.819 real 0m13.933s 00:07:17.819 user 0m51.267s 00:07:17.819 sys 0m1.918s 00:07:17.819 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.819 15:53:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:17.819 ************************************ 00:07:17.819 END TEST nvmf_connect_disconnect 00:07:17.819 ************************************ 00:07:17.819 15:53:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:17.819 15:53:11 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:17.819 15:53:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:17.819 15:53:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.819 15:53:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.819 ************************************ 00:07:17.819 START TEST nvmf_multitarget 00:07:17.819 ************************************ 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:17.819 * Looking for test storage... 
00:07:17.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.819 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.077 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.078 15:53:11 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:18.078 Cannot find device "nvmf_tgt_br" 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:18.078 Cannot find device "nvmf_tgt_br2" 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:18.078 Cannot find device "nvmf_tgt_br" 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:18.078 Cannot find device "nvmf_tgt_br2" 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:07:18.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:18.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:18.078 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:18.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:18.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:07:18.336 00:07:18.336 --- 10.0.0.2 ping statistics --- 00:07:18.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.336 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:18.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:18.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:07:18.336 00:07:18.336 --- 10.0.0.3 ping statistics --- 00:07:18.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.336 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:18.336 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:18.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:18.336 00:07:18.336 --- 10.0.0.1 ping statistics --- 00:07:18.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.336 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=67253 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 67253 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 67253 ']' 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
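nvmfappstart launches a fresh nvmf_tgt inside the namespace and then blocks in waitforlisten until the app's RPC socket is usable; the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line just above is that wait. A minimal way to reproduce the launch and wait by hand (the polling loop is an illustration only, the harness's waitforlisten is more involved):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait for the RPC unix socket to appear (illustrative check, not the real waitforlisten)
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

The -m 0xF core mask is why the startup banner below reports "Total cores available: 4" and starts four reactors, one per core 0 through 3.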
00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.337 15:53:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:18.337 [2024-07-15 15:53:11.999527] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:07:18.337 [2024-07-15 15:53:11.999660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.594 [2024-07-15 15:53:12.138936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.594 [2024-07-15 15:53:12.260634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.594 [2024-07-15 15:53:12.260726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.594 [2024-07-15 15:53:12.260753] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.594 [2024-07-15 15:53:12.260777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.594 [2024-07-15 15:53:12.260784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.594 [2024-07-15 15:53:12.260917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.594 [2024-07-15 15:53:12.261080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.594 [2024-07-15 15:53:12.261264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.594 [2024-07-15 15:53:12.261262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.527 15:53:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.527 15:53:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:19.527 15:53:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:19.527 15:53:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.527 15:53:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:19.527 15:53:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.527 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:19.527 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:19.527 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:19.528 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:19.528 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:19.785 "nvmf_tgt_1" 00:07:19.785 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:19.785 "nvmf_tgt_2" 00:07:19.785 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
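multitarget.sh checks that one SPDK process can host several independent NVMe-oF targets: it counts the targets reported by nvmf_get_targets (one, the default), creates two more, re-counts, then, in the entries that follow, deletes both and confirms the count drops back to one. The multitarget_rpc.py calls amount to:

  rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py   # shorthand for the helper used above
  $rpc nvmf_get_targets | jq length           # 1, only the default target
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc nvmf_get_targets | jq length           # 3
  $rpc nvmf_delete_target -n nvmf_tgt_1       # the deletions show up in the next entries
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length           # back to 1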
00:07:19.785 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:20.042 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:20.042 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:20.042 true 00:07:20.042 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:20.303 true 00:07:20.303 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:20.303 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:20.303 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:20.303 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:20.303 15:53:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:20.303 15:53:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:20.303 15:53:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:20.303 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:20.303 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:20.303 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:20.303 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:20.303 rmmod nvme_tcp 00:07:20.303 rmmod nvme_fabrics 00:07:20.560 rmmod nvme_keyring 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 67253 ']' 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 67253 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 67253 ']' 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 67253 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67253 00:07:20.560 killing process with pid 67253 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67253' 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 67253 00:07:20.560 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 67253 00:07:20.818 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:20.818 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:20.818 15:53:14 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:20.818 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:20.818 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:20.818 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.818 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.818 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.818 15:53:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:20.818 00:07:20.818 real 0m2.916s 00:07:20.818 user 0m9.386s 00:07:20.818 sys 0m0.699s 00:07:20.818 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.818 15:53:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:20.818 ************************************ 00:07:20.818 END TEST nvmf_multitarget 00:07:20.818 ************************************ 00:07:20.818 15:53:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:20.818 15:53:14 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:20.818 15:53:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:20.818 15:53:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.818 15:53:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:20.818 ************************************ 00:07:20.818 START TEST nvmf_rpc 00:07:20.818 ************************************ 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:20.818 * Looking for test storage... 
00:07:20.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.818 15:53:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:20.819 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:21.077 Cannot find device "nvmf_tgt_br" 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:21.077 Cannot find device "nvmf_tgt_br2" 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:21.077 Cannot find device "nvmf_tgt_br" 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:21.077 Cannot find device "nvmf_tgt_br2" 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:21.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:21.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:21.077 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:21.335 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:21.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:07:21.336 00:07:21.336 --- 10.0.0.2 ping statistics --- 00:07:21.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.336 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:21.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:21.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:07:21.336 00:07:21.336 --- 10.0.0.3 ping statistics --- 00:07:21.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.336 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:21.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:21.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:07:21.336 00:07:21.336 --- 10.0.0.1 ping statistics --- 00:07:21.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.336 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67485 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67485 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67485 ']' 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.336 15:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.336 [2024-07-15 15:53:14.950507] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:07:21.336 [2024-07-15 15:53:14.950608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.594 [2024-07-15 15:53:15.087624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.594 [2024-07-15 15:53:15.214171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.594 [2024-07-15 15:53:15.214229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
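For reference, the namespace and veth plumbing traced above condenses to the standalone sketch below. Interface names, addresses and the port-4420 iptables rule mirror the log; the script itself is illustrative rather than the actual nvmf/common.sh helpers, and the remaining target startup notices continue right after it.

#!/usr/bin/env bash
# Sketch: recreate the veth/bridge topology used by this TCP autotest run.
# Host side gets 10.0.0.1; the target namespace gets 10.0.0.2 and 10.0.0.3.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Sanity checks, as in the log: host <-> namespace reachability over the bridge.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside that namespace (the ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF invocation above), while the nvme-cli initiator commands later in the log run from the host side of the bridge.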
00:07:21.594 [2024-07-15 15:53:15.214241] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.594 [2024-07-15 15:53:15.214250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.594 [2024-07-15 15:53:15.214257] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.594 [2024-07-15 15:53:15.214425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.594 [2024-07-15 15:53:15.215201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.594 [2024-07-15 15:53:15.215316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.594 [2024-07-15 15:53:15.215318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.526 15:53:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.526 15:53:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:22.526 15:53:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:22.526 15:53:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.527 15:53:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.527 15:53:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.527 15:53:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:22.527 15:53:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.527 15:53:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:22.527 "poll_groups": [ 00:07:22.527 { 00:07:22.527 "admin_qpairs": 0, 00:07:22.527 "completed_nvme_io": 0, 00:07:22.527 "current_admin_qpairs": 0, 00:07:22.527 "current_io_qpairs": 0, 00:07:22.527 "io_qpairs": 0, 00:07:22.527 "name": "nvmf_tgt_poll_group_000", 00:07:22.527 "pending_bdev_io": 0, 00:07:22.527 "transports": [] 00:07:22.527 }, 00:07:22.527 { 00:07:22.527 "admin_qpairs": 0, 00:07:22.527 "completed_nvme_io": 0, 00:07:22.527 "current_admin_qpairs": 0, 00:07:22.527 "current_io_qpairs": 0, 00:07:22.527 "io_qpairs": 0, 00:07:22.527 "name": "nvmf_tgt_poll_group_001", 00:07:22.527 "pending_bdev_io": 0, 00:07:22.527 "transports": [] 00:07:22.527 }, 00:07:22.527 { 00:07:22.527 "admin_qpairs": 0, 00:07:22.527 "completed_nvme_io": 0, 00:07:22.527 "current_admin_qpairs": 0, 00:07:22.527 "current_io_qpairs": 0, 00:07:22.527 "io_qpairs": 0, 00:07:22.527 "name": "nvmf_tgt_poll_group_002", 00:07:22.527 "pending_bdev_io": 0, 00:07:22.527 "transports": [] 00:07:22.527 }, 00:07:22.527 { 00:07:22.527 "admin_qpairs": 0, 00:07:22.527 "completed_nvme_io": 0, 00:07:22.527 "current_admin_qpairs": 0, 00:07:22.527 "current_io_qpairs": 0, 00:07:22.527 "io_qpairs": 0, 00:07:22.527 "name": "nvmf_tgt_poll_group_003", 00:07:22.527 "pending_bdev_io": 0, 00:07:22.527 "transports": [] 00:07:22.527 } 00:07:22.527 ], 00:07:22.527 "tick_rate": 2200000000 00:07:22.527 }' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # 
jq '.poll_groups[].name' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.527 [2024-07-15 15:53:16.129087] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:22.527 "poll_groups": [ 00:07:22.527 { 00:07:22.527 "admin_qpairs": 0, 00:07:22.527 "completed_nvme_io": 0, 00:07:22.527 "current_admin_qpairs": 0, 00:07:22.527 "current_io_qpairs": 0, 00:07:22.527 "io_qpairs": 0, 00:07:22.527 "name": "nvmf_tgt_poll_group_000", 00:07:22.527 "pending_bdev_io": 0, 00:07:22.527 "transports": [ 00:07:22.527 { 00:07:22.527 "trtype": "TCP" 00:07:22.527 } 00:07:22.527 ] 00:07:22.527 }, 00:07:22.527 { 00:07:22.527 "admin_qpairs": 0, 00:07:22.527 "completed_nvme_io": 0, 00:07:22.527 "current_admin_qpairs": 0, 00:07:22.527 "current_io_qpairs": 0, 00:07:22.527 "io_qpairs": 0, 00:07:22.527 "name": "nvmf_tgt_poll_group_001", 00:07:22.527 "pending_bdev_io": 0, 00:07:22.527 "transports": [ 00:07:22.527 { 00:07:22.527 "trtype": "TCP" 00:07:22.527 } 00:07:22.527 ] 00:07:22.527 }, 00:07:22.527 { 00:07:22.527 "admin_qpairs": 0, 00:07:22.527 "completed_nvme_io": 0, 00:07:22.527 "current_admin_qpairs": 0, 00:07:22.527 "current_io_qpairs": 0, 00:07:22.527 "io_qpairs": 0, 00:07:22.527 "name": "nvmf_tgt_poll_group_002", 00:07:22.527 "pending_bdev_io": 0, 00:07:22.527 "transports": [ 00:07:22.527 { 00:07:22.527 "trtype": "TCP" 00:07:22.527 } 00:07:22.527 ] 00:07:22.527 }, 00:07:22.527 { 00:07:22.527 "admin_qpairs": 0, 00:07:22.527 "completed_nvme_io": 0, 00:07:22.527 "current_admin_qpairs": 0, 00:07:22.527 "current_io_qpairs": 0, 00:07:22.527 "io_qpairs": 0, 00:07:22.527 "name": "nvmf_tgt_poll_group_003", 00:07:22.527 "pending_bdev_io": 0, 00:07:22.527 "transports": [ 00:07:22.527 { 00:07:22.527 "trtype": "TCP" 00:07:22.527 } 00:07:22.527 ] 00:07:22.527 } 00:07:22.527 ], 00:07:22.527 "tick_rate": 2200000000 00:07:22.527 }' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
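The jcount/jsum checks driven by target/rpc.sh here (the jq and awk halves of this jsum call continue just below) reduce to two small jq/awk helpers. The following is an approximation reconstructed from the trace rather than a copy of the script; in particular, feeding the JSON through a here-string from $stats is an assumption about how the helpers receive their input.

# jcount: how many values a jq filter yields (e.g. the number of poll groups).
jcount() {
    local filter=$1
    jq "$filter" <<< "$stats" | wc -l
}

# jsum: sum of the numeric values a jq filter yields (e.g. total io_qpairs).
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

stats=$(rpc_cmd nvmf_get_stats)                 # JSON like the blob above
(( $(jcount '.poll_groups[].name') == 4 ))      # one poll group per core with -m 0xF
(( $(jsum '.poll_groups[].io_qpairs') == 0 ))   # no I/O qpairs before any host connects

After nvmf_create_transport -t tcp -o -u 8192 succeeds, the same stats call reports an empty TCP transport entry per poll group, which is what the second nvmf_get_stats blob above shows.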
00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:22.527 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.786 Malloc1 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.786 [2024-07-15 15:53:16.338645] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -a 10.0.0.2 -s 4420 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -a 10.0.0.2 -s 4420 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -a 10.0.0.2 -s 4420 00:07:22.786 [2024-07-15 15:53:16.366950] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d' 00:07:22.786 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:22.786 could not add new controller: failed to write to nvme-fabrics device 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.786 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:23.044 15:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:23.044 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:23.044 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:23.044 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:23.044 15:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:24.943 15:53:18 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:24.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:24.943 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:24.943 [2024-07-15 15:53:18.667859] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d' 00:07:24.943 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:24.943 could not add new controller: failed to write to nvme-fabrics device 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:25.200 15:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:27.726 15:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:27.726 15:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.726 15:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:27.726 15:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:27.726 15:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.726 15:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:27.726 15:53:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:27.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:27.726 15:53:21 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.726 [2024-07-15 15:53:21.065160] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:27.726 15:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:29.642 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:29.642 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:29.642 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.642 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:29.642 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.642 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:29.642 15:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.980 [2024-07-15 15:53:23.468455] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.980 15:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:30.238 15:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:30.238 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:30.238 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:30.238 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:30.238 15:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.137 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.395 [2024-07-15 15:53:25.887757] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.395 15:53:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.395 15:53:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.395 15:53:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:32.395 15:53:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.395 15:53:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:32.395 15:53:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.925 [2024-07-15 15:53:28.295275] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:34.925 15:53:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:36.821 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:36.821 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:36.821 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:36.821 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:36.821 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:36.821 
15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:36.821 15:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:36.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.821 15:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:36.821 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:36.822 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:36.822 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.079 [2024-07-15 15:53:30.598382] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.079 15:53:30 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:37.079 15:53:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:39.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.607 [2024-07-15 15:53:32.905437] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.607 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 [2024-07-15 15:53:32.957472] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 [2024-07-15 15:53:33.009540] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
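A note on the access-control failures earlier in the log (target/rpc.sh@58 and @69, where nvme connect reports "does not allow host" and "could not add new controller"): those failures are intentional, and the NOT wrapper inverts their exit status so the test passes only if the connect is rejected. A condensed sketch of that negative check, with the host NQN and host ID taken from the log:

NQN=nqn.2016-06.io.spdk:cnode1
HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

# The subsystem exists, but this host is not on its allowed list, so the
# fabrics write is expected to fail with "does not allow host".
if nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 2>/dev/null; then
    echo "unexpected: a non-allowed host was able to connect" >&2
    exit 1
fi

# Whitelisting the host (nvmf_subsystem_add_host, as at target/rpc.sh@61 above)
# or enabling allow_any_host makes the same connect succeed.
rpc_cmd nvmf_subsystem_add_host "$NQN" "$HOSTNQN"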
00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 [2024-07-15 15:53:33.057615] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
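Condensed into a single pass, the loop iterations traced above (target/rpc.sh@81-@94 with a host connect, and the lighter @99-@107 variant without one) look roughly like the sketch below; rpc_cmd is assumed to be the usual autotest wrapper around scripts/rpc.py, and all names, addresses and ports are the ones from the log.

NQN=nqn.2016-06.io.spdk:cnode1
HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

# Target side: subsystem with a serial number, TCP listener on 10.0.0.2:4420,
# Malloc1 attached as namespace 5, any host allowed.
rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
rpc_cmd nvmf_subsystem_allow_any_host "$NQN"

# Initiator side: connect, wait for the serial to show up, then disconnect.
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
    sleep 2   # waitforserial above polls roughly like this, up to a retry limit
done
nvme disconnect -n "$NQN"

# Tear the subsystem back down before the next iteration.
rpc_cmd nvmf_subsystem_remove_ns "$NQN" 5
rpc_cmd nvmf_delete_subsystem "$NQN"

The final nvmf_get_stats blob below reflects this activity: the poll groups have accumulated admin and I/O qpairs and completed_nvme_io counts, which the closing jsum checks assert are non-zero before the test tears everything down.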
00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 [2024-07-15 15:53:33.105630] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:39.608 "poll_groups": [ 00:07:39.608 { 00:07:39.608 "admin_qpairs": 2, 00:07:39.608 "completed_nvme_io": 70, 00:07:39.608 "current_admin_qpairs": 0, 00:07:39.608 "current_io_qpairs": 0, 00:07:39.608 "io_qpairs": 16, 00:07:39.608 "name": "nvmf_tgt_poll_group_000", 00:07:39.608 "pending_bdev_io": 0, 00:07:39.608 "transports": [ 00:07:39.608 { 00:07:39.608 "trtype": "TCP" 00:07:39.608 } 00:07:39.608 ] 00:07:39.608 }, 00:07:39.608 { 00:07:39.608 "admin_qpairs": 3, 00:07:39.608 "completed_nvme_io": 117, 00:07:39.608 "current_admin_qpairs": 0, 00:07:39.608 "current_io_qpairs": 0, 00:07:39.608 "io_qpairs": 17, 00:07:39.608 "name": "nvmf_tgt_poll_group_001", 00:07:39.608 "pending_bdev_io": 0, 00:07:39.608 "transports": [ 00:07:39.608 { 00:07:39.608 "trtype": "TCP" 00:07:39.608 } 00:07:39.608 ] 00:07:39.608 }, 00:07:39.608 { 00:07:39.608 "admin_qpairs": 1, 00:07:39.608 
"completed_nvme_io": 166, 00:07:39.608 "current_admin_qpairs": 0, 00:07:39.608 "current_io_qpairs": 0, 00:07:39.608 "io_qpairs": 19, 00:07:39.608 "name": "nvmf_tgt_poll_group_002", 00:07:39.608 "pending_bdev_io": 0, 00:07:39.608 "transports": [ 00:07:39.608 { 00:07:39.608 "trtype": "TCP" 00:07:39.608 } 00:07:39.608 ] 00:07:39.608 }, 00:07:39.608 { 00:07:39.608 "admin_qpairs": 1, 00:07:39.608 "completed_nvme_io": 67, 00:07:39.608 "current_admin_qpairs": 0, 00:07:39.608 "current_io_qpairs": 0, 00:07:39.608 "io_qpairs": 18, 00:07:39.608 "name": "nvmf_tgt_poll_group_003", 00:07:39.608 "pending_bdev_io": 0, 00:07:39.608 "transports": [ 00:07:39.608 { 00:07:39.608 "trtype": "TCP" 00:07:39.608 } 00:07:39.608 ] 00:07:39.608 } 00:07:39.608 ], 00:07:39.608 "tick_rate": 2200000000 00:07:39.608 }' 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:39.608 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.609 rmmod nvme_tcp 00:07:39.609 rmmod nvme_fabrics 00:07:39.609 rmmod nvme_keyring 00:07:39.609 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67485 ']' 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67485 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67485 ']' 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67485 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67485 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67485' 00:07:39.867 killing process with pid 67485 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67485 00:07:39.867 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67485 00:07:40.125 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:40.125 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:40.125 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:40.125 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.125 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:40.125 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.125 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.125 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.125 15:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:40.125 00:07:40.125 real 0m19.253s 00:07:40.125 user 1m12.267s 00:07:40.125 sys 0m2.660s 00:07:40.125 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.125 15:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.125 ************************************ 00:07:40.125 END TEST nvmf_rpc 00:07:40.125 ************************************ 00:07:40.125 15:53:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:40.125 15:53:33 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:40.125 15:53:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:40.125 15:53:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.125 15:53:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.125 ************************************ 00:07:40.125 START TEST nvmf_invalid 00:07:40.125 ************************************ 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:40.125 * Looking for test storage... 
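The nvmf_get_stats check a little further up only asserts that the per-poll-group counters sum to something non-zero (4 poll groups: 2+3+1+1 = 7 admin qpairs, 16+17+19+18 = 70 I/O qpairs). A minimal recreation of that jsum-style aggregation, assuming the same rpc.py path and a running target:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Pull one numeric field out of every poll group and sum it with awk.
  total_io_qpairs=$("$rpc" nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')
  (( total_io_qpairs > 0 ))   # mirrors the (( 70 > 0 )) assertion in the log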
00:07:40.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.125 
15:53:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:40.125 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.126 15:53:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.126 15:53:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.383 15:53:33 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:40.383 Cannot find device "nvmf_tgt_br" 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:40.383 Cannot find device "nvmf_tgt_br2" 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:40.383 Cannot find device "nvmf_tgt_br" 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:40.383 Cannot find device "nvmf_tgt_br2" 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:40.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:07:40.383 15:53:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:40.383 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:40.383 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:40.641 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:40.641 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:40.641 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:40.641 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:40.641 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:40.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:07:40.641 00:07:40.641 --- 10.0.0.2 ping statistics --- 00:07:40.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.641 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:07:40.641 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:40.641 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:40.641 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:07:40.641 00:07:40.641 --- 10.0.0.3 ping statistics --- 00:07:40.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.641 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:40.641 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:40.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:40.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:07:40.641 00:07:40.641 --- 10.0.0.1 ping statistics --- 00:07:40.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.641 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=68002 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 68002 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 68002 ']' 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.642 15:53:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:40.642 [2024-07-15 15:53:34.246016] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
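With the interfaces verified by ping, nvmfappstart launches the target inside the namespace and waits for its RPC socket before the invalid-parameter cases begin. A hypothetical condensed version of that step: the binary path and flags are taken from the log, while the readiness loop using rpc_get_methods is an assumption here (the test itself uses its waitforlisten helper):

  # Launch nvmf_tgt inside the target namespace, as the log does above.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done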
00:07:40.642 [2024-07-15 15:53:34.246129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.900 [2024-07-15 15:53:34.378697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.900 [2024-07-15 15:53:34.511304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.900 [2024-07-15 15:53:34.511377] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.900 [2024-07-15 15:53:34.511391] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.900 [2024-07-15 15:53:34.511401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.900 [2024-07-15 15:53:34.511409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.900 [2024-07-15 15:53:34.511517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.900 [2024-07-15 15:53:34.511666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.900 [2024-07-15 15:53:34.511593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.900 [2024-07-15 15:53:34.511656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.833 15:53:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.834 15:53:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:41.834 15:53:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.834 15:53:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.834 15:53:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:41.834 15:53:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.834 15:53:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:41.834 15:53:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8307 00:07:41.834 [2024-07-15 15:53:35.519200] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:41.834 15:53:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 15:53:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8307 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:41.834 request: 00:07:41.834 { 00:07:41.834 "method": "nvmf_create_subsystem", 00:07:41.834 "params": { 00:07:41.834 "nqn": "nqn.2016-06.io.spdk:cnode8307", 00:07:41.834 "tgt_name": "foobar" 00:07:41.834 } 00:07:41.834 } 00:07:41.834 Got JSON-RPC error response 00:07:41.834 GoRPCClient: error on JSON-RPC call' 00:07:41.834 15:53:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 15:53:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8307 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:41.834 request: 
00:07:41.834 { 00:07:41.834 "method": "nvmf_create_subsystem", 00:07:41.834 "params": { 00:07:41.834 "nqn": "nqn.2016-06.io.spdk:cnode8307", 00:07:41.834 "tgt_name": "foobar" 00:07:41.834 } 00:07:41.834 } 00:07:41.834 Got JSON-RPC error response 00:07:41.834 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:41.834 15:53:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:41.834 15:53:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14834 00:07:42.116 [2024-07-15 15:53:35.811483] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14834: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:42.116 15:53:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 15:53:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14834 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:42.116 request: 00:07:42.116 { 00:07:42.116 "method": "nvmf_create_subsystem", 00:07:42.116 "params": { 00:07:42.116 "nqn": "nqn.2016-06.io.spdk:cnode14834", 00:07:42.116 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:42.116 } 00:07:42.116 } 00:07:42.116 Got JSON-RPC error response 00:07:42.116 GoRPCClient: error on JSON-RPC call' 00:07:42.116 15:53:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 15:53:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14834 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:42.116 request: 00:07:42.116 { 00:07:42.116 "method": "nvmf_create_subsystem", 00:07:42.116 "params": { 00:07:42.116 "nqn": "nqn.2016-06.io.spdk:cnode14834", 00:07:42.116 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:42.116 } 00:07:42.116 } 00:07:42.116 Got JSON-RPC error response 00:07:42.116 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:42.116 15:53:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:42.116 15:53:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29874 00:07:42.683 [2024-07-15 15:53:36.183799] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29874: invalid model number 'SPDK_Controller' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 15:53:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode29874], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:42.683 request: 00:07:42.683 { 00:07:42.683 "method": "nvmf_create_subsystem", 00:07:42.683 "params": { 00:07:42.683 "nqn": "nqn.2016-06.io.spdk:cnode29874", 00:07:42.683 "model_number": "SPDK_Controller\u001f" 00:07:42.683 } 00:07:42.683 } 00:07:42.683 Got JSON-RPC error response 00:07:42.683 GoRPCClient: error on JSON-RPC call' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 15:53:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode29874], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:42.683 request: 00:07:42.683 { 00:07:42.683 "method": "nvmf_create_subsystem", 00:07:42.683 "params": { 00:07:42.683 "nqn": "nqn.2016-06.io.spdk:cnode29874", 00:07:42.683 "model_number": "SPDK_Controller\u001f" 00:07:42.683 } 00:07:42.683 } 00:07:42.683 Got JSON-RPC error response 00:07:42.683 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:42.683 15:53:36 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:42.683 15:53:36 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.683 15:53:36 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 3 == \- ]] 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '3&]PT`Rw}VxSeIHLbXgP9' 00:07:42.683 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '3&]PT`Rw}VxSeIHLbXgP9' nqn.2016-06.io.spdk:cnode23263 00:07:42.942 [2024-07-15 15:53:36.540139] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23263: invalid serial number '3&]PT`Rw}VxSeIHLbXgP9' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/15 15:53:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23263 serial_number:3&]PT`Rw}VxSeIHLbXgP9], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 3&]PT`Rw}VxSeIHLbXgP9 00:07:42.942 request: 00:07:42.942 { 00:07:42.942 "method": "nvmf_create_subsystem", 00:07:42.942 "params": { 00:07:42.942 "nqn": "nqn.2016-06.io.spdk:cnode23263", 00:07:42.942 "serial_number": "3&]PT`Rw}VxSeIHLbXgP9" 00:07:42.942 } 00:07:42.942 } 00:07:42.942 Got JSON-RPC error response 00:07:42.942 GoRPCClient: error on JSON-RPC call' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/15 15:53:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23263 serial_number:3&]PT`Rw}VxSeIHLbXgP9], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 3&]PT`Rw}VxSeIHLbXgP9 00:07:42.942 request: 00:07:42.942 { 00:07:42.942 "method": "nvmf_create_subsystem", 00:07:42.942 "params": { 00:07:42.942 "nqn": "nqn.2016-06.io.spdk:cnode23263", 00:07:42.942 "serial_number": "3&]PT`Rw}VxSeIHLbXgP9" 00:07:42.942 } 00:07:42.942 } 00:07:42.942 Got JSON-RPC error response 00:07:42.942 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:42.942 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 80 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:42.943 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'gqN#Q%|#3P"C,y!(u)BC?~5:47\f_3MuXD9`1`!UI' 00:07:43.202 15:53:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'gqN#Q%|#3P"C,y!(u)BC?~5:47\f_3MuXD9`1`!UI' nqn.2016-06.io.spdk:cnode1069 00:07:43.460 [2024-07-15 15:53:37.040615] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1069: invalid model number 'gqN#Q%|#3P"C,y!(u)BC?~5:47\f_3MuXD9`1`!UI' 00:07:43.460 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/15 15:53:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:gqN#Q%|#3P"C,y!(u)BC?~5:47\f_3MuXD9`1`!UI nqn:nqn.2016-06.io.spdk:cnode1069], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN gqN#Q%|#3P"C,y!(u)BC?~5:47\f_3MuXD9`1`!UI 00:07:43.460 request: 00:07:43.460 { 00:07:43.460 "method": "nvmf_create_subsystem", 00:07:43.460 "params": { 00:07:43.460 "nqn": "nqn.2016-06.io.spdk:cnode1069", 00:07:43.460 "model_number": "gqN#Q%|#3P\"C,y!(u)BC?~5:47\\f_3MuXD9`1`!UI" 00:07:43.460 } 00:07:43.460 } 00:07:43.460 Got JSON-RPC error response 00:07:43.460 GoRPCClient: error on JSON-RPC call' 00:07:43.460 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/15 15:53:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:gqN#Q%|#3P"C,y!(u)BC?~5:47\f_3MuXD9`1`!UI nqn:nqn.2016-06.io.spdk:cnode1069], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN gqN#Q%|#3P"C,y!(u)BC?~5:47\f_3MuXD9`1`!UI 00:07:43.460 request: 00:07:43.460 { 00:07:43.460 "method": 
"nvmf_create_subsystem", 00:07:43.460 "params": { 00:07:43.460 "nqn": "nqn.2016-06.io.spdk:cnode1069", 00:07:43.460 "model_number": "gqN#Q%|#3P\"C,y!(u)BC?~5:47\\f_3MuXD9`1`!UI" 00:07:43.460 } 00:07:43.460 } 00:07:43.460 Got JSON-RPC error response 00:07:43.460 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:43.460 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:43.718 [2024-07-15 15:53:37.336937] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.718 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:43.976 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:43.976 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:43.976 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:43.976 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:43.976 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:44.541 [2024-07-15 15:53:37.971917] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:44.541 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/15 15:53:37 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:44.541 request: 00:07:44.541 { 00:07:44.541 "method": "nvmf_subsystem_remove_listener", 00:07:44.541 "params": { 00:07:44.541 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:44.541 "listen_address": { 00:07:44.541 "trtype": "tcp", 00:07:44.541 "traddr": "", 00:07:44.541 "trsvcid": "4421" 00:07:44.541 } 00:07:44.541 } 00:07:44.541 } 00:07:44.541 Got JSON-RPC error response 00:07:44.541 GoRPCClient: error on JSON-RPC call' 00:07:44.541 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/15 15:53:37 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:44.541 request: 00:07:44.541 { 00:07:44.541 "method": "nvmf_subsystem_remove_listener", 00:07:44.541 "params": { 00:07:44.541 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:44.541 "listen_address": { 00:07:44.541 "trtype": "tcp", 00:07:44.541 "traddr": "", 00:07:44.541 "trsvcid": "4421" 00:07:44.541 } 00:07:44.541 } 00:07:44.541 } 00:07:44.541 Got JSON-RPC error response 00:07:44.541 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:44.541 15:53:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18673 -i 0 00:07:44.799 [2024-07-15 15:53:38.288147] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18673: invalid cntlid range [0-65519] 00:07:44.799 15:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/15 15:53:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode18673], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:44.799 request: 00:07:44.799 { 00:07:44.799 "method": "nvmf_create_subsystem", 00:07:44.799 "params": { 00:07:44.799 "nqn": "nqn.2016-06.io.spdk:cnode18673", 00:07:44.799 "min_cntlid": 0 00:07:44.799 } 00:07:44.799 } 00:07:44.799 Got JSON-RPC error response 00:07:44.799 GoRPCClient: error on JSON-RPC call' 00:07:44.799 15:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/15 15:53:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode18673], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:44.799 request: 00:07:44.799 { 00:07:44.799 "method": "nvmf_create_subsystem", 00:07:44.799 "params": { 00:07:44.799 "nqn": "nqn.2016-06.io.spdk:cnode18673", 00:07:44.799 "min_cntlid": 0 00:07:44.799 } 00:07:44.799 } 00:07:44.799 Got JSON-RPC error response 00:07:44.799 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:44.799 15:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29183 -i 65520 00:07:45.058 [2024-07-15 15:53:38.580405] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29183: invalid cntlid range [65520-65519] 00:07:45.058 15:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/15 15:53:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29183], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:45.058 request: 00:07:45.058 { 00:07:45.058 "method": "nvmf_create_subsystem", 00:07:45.058 "params": { 00:07:45.058 "nqn": "nqn.2016-06.io.spdk:cnode29183", 00:07:45.058 "min_cntlid": 65520 00:07:45.058 } 00:07:45.058 } 00:07:45.058 Got JSON-RPC error response 00:07:45.058 GoRPCClient: error on JSON-RPC call' 00:07:45.058 15:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/15 15:53:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29183], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:45.058 request: 00:07:45.058 { 00:07:45.058 "method": "nvmf_create_subsystem", 00:07:45.058 "params": { 00:07:45.058 "nqn": "nqn.2016-06.io.spdk:cnode29183", 00:07:45.058 "min_cntlid": 65520 00:07:45.058 } 00:07:45.058 } 00:07:45.058 Got JSON-RPC error response 00:07:45.058 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:45.058 15:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12014 -I 0 00:07:45.316 [2024-07-15 15:53:38.888663] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12014: invalid cntlid range [1-0] 00:07:45.316 15:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/15 15:53:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12014], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:45.316 request: 00:07:45.316 { 00:07:45.316 
"method": "nvmf_create_subsystem", 00:07:45.316 "params": { 00:07:45.316 "nqn": "nqn.2016-06.io.spdk:cnode12014", 00:07:45.316 "max_cntlid": 0 00:07:45.316 } 00:07:45.316 } 00:07:45.316 Got JSON-RPC error response 00:07:45.316 GoRPCClient: error on JSON-RPC call' 00:07:45.316 15:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/15 15:53:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12014], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:45.316 request: 00:07:45.316 { 00:07:45.316 "method": "nvmf_create_subsystem", 00:07:45.316 "params": { 00:07:45.316 "nqn": "nqn.2016-06.io.spdk:cnode12014", 00:07:45.316 "max_cntlid": 0 00:07:45.316 } 00:07:45.316 } 00:07:45.316 Got JSON-RPC error response 00:07:45.316 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:45.316 15:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9970 -I 65520 00:07:45.574 [2024-07-15 15:53:39.140926] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9970: invalid cntlid range [1-65520] 00:07:45.575 15:53:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/15 15:53:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9970], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:45.575 request: 00:07:45.575 { 00:07:45.575 "method": "nvmf_create_subsystem", 00:07:45.575 "params": { 00:07:45.575 "nqn": "nqn.2016-06.io.spdk:cnode9970", 00:07:45.575 "max_cntlid": 65520 00:07:45.575 } 00:07:45.575 } 00:07:45.575 Got JSON-RPC error response 00:07:45.575 GoRPCClient: error on JSON-RPC call' 00:07:45.575 15:53:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/15 15:53:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9970], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:45.575 request: 00:07:45.575 { 00:07:45.575 "method": "nvmf_create_subsystem", 00:07:45.575 "params": { 00:07:45.575 "nqn": "nqn.2016-06.io.spdk:cnode9970", 00:07:45.575 "max_cntlid": 65520 00:07:45.575 } 00:07:45.575 } 00:07:45.575 Got JSON-RPC error response 00:07:45.575 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:45.575 15:53:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28345 -i 6 -I 5 00:07:45.833 [2024-07-15 15:53:39.441140] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28345: invalid cntlid range [6-5] 00:07:45.833 15:53:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/15 15:53:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode28345], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:45.833 request: 00:07:45.833 { 00:07:45.833 "method": "nvmf_create_subsystem", 00:07:45.833 "params": { 00:07:45.833 "nqn": "nqn.2016-06.io.spdk:cnode28345", 00:07:45.833 "min_cntlid": 6, 00:07:45.833 "max_cntlid": 5 00:07:45.833 } 00:07:45.833 } 00:07:45.833 
Got JSON-RPC error response 00:07:45.833 GoRPCClient: error on JSON-RPC call' 00:07:45.833 15:53:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/15 15:53:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode28345], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:45.833 request: 00:07:45.833 { 00:07:45.833 "method": "nvmf_create_subsystem", 00:07:45.833 "params": { 00:07:45.833 "nqn": "nqn.2016-06.io.spdk:cnode28345", 00:07:45.833 "min_cntlid": 6, 00:07:45.833 "max_cntlid": 5 00:07:45.833 } 00:07:45.833 } 00:07:45.833 Got JSON-RPC error response 00:07:45.833 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:45.834 15:53:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:46.092 { 00:07:46.092 "name": "foobar", 00:07:46.092 "method": "nvmf_delete_target", 00:07:46.092 "req_id": 1 00:07:46.092 } 00:07:46.092 Got JSON-RPC error response 00:07:46.092 response: 00:07:46.092 { 00:07:46.092 "code": -32602, 00:07:46.092 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:46.092 }' 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:46.092 { 00:07:46.092 "name": "foobar", 00:07:46.092 "method": "nvmf_delete_target", 00:07:46.092 "req_id": 1 00:07:46.092 } 00:07:46.092 Got JSON-RPC error response 00:07:46.092 response: 00:07:46.092 { 00:07:46.092 "code": -32602, 00:07:46.092 "message": "The specified target doesn't exist, cannot delete it." 
00:07:46.092 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:46.092 rmmod nvme_tcp 00:07:46.092 rmmod nvme_fabrics 00:07:46.092 rmmod nvme_keyring 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 68002 ']' 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 68002 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 68002 ']' 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 68002 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68002 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:46.092 killing process with pid 68002 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68002' 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 68002 00:07:46.092 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 68002 00:07:46.351 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.351 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.351 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.351 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.351 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.351 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.351 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.351 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.351 15:53:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:46.351 00:07:46.351 real 0m6.246s 00:07:46.351 user 0m25.298s 00:07:46.351 sys 0m1.287s 00:07:46.351 15:53:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.351 15:53:39 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.351 ************************************ 00:07:46.351 END TEST nvmf_invalid 00:07:46.351 ************************************ 00:07:46.351 15:53:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:46.351 15:53:40 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:46.351 15:53:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:46.351 15:53:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.351 15:53:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.351 ************************************ 00:07:46.351 START TEST nvmf_abort 00:07:46.351 ************************************ 00:07:46.351 15:53:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:46.628 * Looking for test storage... 00:07:46.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:46.628 15:53:40 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:46.628 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:46.628 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.628 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.628 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.628 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.628 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.628 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:46.629 15:53:40 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:46.629 Cannot find device "nvmf_tgt_br" 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:46.629 Cannot find device "nvmf_tgt_br2" 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:46.629 Cannot find device "nvmf_tgt_br" 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:46.629 Cannot find device "nvmf_tgt_br2" 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:07:46.629 15:53:40 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:46.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:46.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:46.629 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:46.924 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:46.924 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:46.924 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:46.924 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:46.924 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:46.924 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:46.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:46.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:07:46.925 00:07:46.925 --- 10.0.0.2 ping statistics --- 00:07:46.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.925 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:46.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:46.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:46.925 00:07:46.925 --- 10.0.0.3 ping statistics --- 00:07:46.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.925 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:46.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:07:46.925 00:07:46.925 --- 10.0.0.1 ping statistics --- 00:07:46.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.925 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68516 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68516 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68516 ']' 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
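The entries above show the two pieces of setup the abort test needs before any NVMe/TCP traffic can flow: nvmf_veth_init builds a veth-plus-bridge topology with the target isolated in the nvmf_tgt_ns_spdk namespace, and nvmfappstart launches nvmf_tgt inside that namespace and waits for its RPC socket at /var/tmp/spdk.sock. A condensed sketch of that sequence, using only commands visible in the log (the second target interface, teardown, and error handling are omitted; paths and names are the ones from this run):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2     # same reachability check as in the log

# Start the target inside the namespace; waitforlisten (pid 68516 in this run)
# then polls /var/tmp/spdk.sock before the test issues any RPCs.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &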
00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.925 15:53:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:46.925 [2024-07-15 15:53:40.563230] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:07:46.925 [2024-07-15 15:53:40.563322] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.183 [2024-07-15 15:53:40.724176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.183 [2024-07-15 15:53:40.866298] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.183 [2024-07-15 15:53:40.866356] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.183 [2024-07-15 15:53:40.866371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.183 [2024-07-15 15:53:40.866382] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.183 [2024-07-15 15:53:40.866391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.183 [2024-07-15 15:53:40.866529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.183 [2024-07-15 15:53:40.867081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.183 [2024-07-15 15:53:40.867139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.118 [2024-07-15 15:53:41.696757] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.118 Malloc0 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:07:48.118 Delay0 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.118 [2024-07-15 15:53:41.772743] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.118 15:53:41 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:48.376 [2024-07-15 15:53:41.947360] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:50.275 Initializing NVMe Controllers 00:07:50.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:50.275 controller IO queue size 128 less than required 00:07:50.275 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:50.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:50.275 Initialization complete. Launching workers. 
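At this point the target side of the abort test is fully provisioned over JSON-RPC: the TCP transport is created, a 64 MB malloc bdev with 4096-byte blocks is wrapped in a delay bdev (so the abort example always has queued I/O to cancel), and Delay0 is exposed as a namespace of nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420. The rpc_cmd helper in the log issues these calls through the target's RPC socket; a minimal equivalent sketch using rpc.py directly (assuming the default /var/tmp/spdk.sock socket), followed by the abort example invocation copied from the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
# The delay bdev adds artificial latency so requests stay outstanding long enough to be aborted.
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side: one core (-c 0x1) driving queue depth 128 against the subsystem,
# issuing aborts against the queued I/O, exactly as invoked in the log.
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128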
00:07:50.275 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34199 00:07:50.275 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34260, failed to submit 62 00:07:50.275 success 34203, unsuccess 57, failed 0 00:07:50.275 15:53:43 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:50.275 15:53:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.275 15:53:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:50.275 15:53:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.275 15:53:43 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:50.275 15:53:43 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:50.275 15:53:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:50.275 15:53:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:50.533 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:50.533 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:50.533 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:50.533 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:50.533 rmmod nvme_tcp 00:07:50.533 rmmod nvme_fabrics 00:07:50.533 rmmod nvme_keyring 00:07:50.533 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.533 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:50.533 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:50.533 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68516 ']' 00:07:50.533 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68516 00:07:50.533 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68516 ']' 00:07:50.533 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68516 00:07:50.534 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:50.534 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.534 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68516 00:07:50.792 killing process with pid 68516 00:07:50.792 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:50.792 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:50.792 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68516' 00:07:50.792 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68516 00:07:50.792 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68516 00:07:51.050 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:51.051 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:51.051 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:51.051 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.051 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.051 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.051 15:53:44 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.051 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.051 15:53:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:51.051 00:07:51.051 real 0m4.543s 00:07:51.051 user 0m12.969s 00:07:51.051 sys 0m1.054s 00:07:51.051 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.051 15:53:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.051 ************************************ 00:07:51.051 END TEST nvmf_abort 00:07:51.051 ************************************ 00:07:51.051 15:53:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:51.051 15:53:44 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:51.051 15:53:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:51.051 15:53:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.051 15:53:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.051 ************************************ 00:07:51.051 START TEST nvmf_ns_hotplug_stress 00:07:51.051 ************************************ 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:51.051 * Looking for test storage... 00:07:51.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:51.051 15:53:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.051 15:53:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:51.051 Cannot find device "nvmf_tgt_br" 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:07:51.051 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:51.309 Cannot find device "nvmf_tgt_br2" 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:51.309 Cannot find device "nvmf_tgt_br" 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:51.309 Cannot find device "nvmf_tgt_br2" 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:51.309 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:51.309 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:51.309 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:51.310 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:51.310 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:51.310 15:53:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:51.310 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:51.310 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:51.310 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:51.310 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:51.310 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:51.310 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:51.310 15:53:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:51.310 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:51.310 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:51.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:07:51.568 00:07:51.568 --- 10.0.0.2 ping statistics --- 00:07:51.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.568 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:51.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:51.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:07:51.568 00:07:51.568 --- 10.0.0.3 ping statistics --- 00:07:51.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.568 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:51.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:51.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:51.568 00:07:51.568 --- 10.0.0.1 ping statistics --- 00:07:51.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.568 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68781 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68781 00:07:51.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68781 ']' 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.568 15:53:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:51.568 [2024-07-15 15:53:45.183762] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:07:51.568 [2024-07-15 15:53:45.183891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.854 [2024-07-15 15:53:45.330449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.854 [2024-07-15 15:53:45.500767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
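For readability, the nvmf_veth_init sequence traced above reduces to a small veth/bridge topology: the host keeps nvmf_init_if (10.0.0.1) as the initiator-side interface, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace for the target, and the host-side peers are joined on the nvmf_br bridge. A condensed sketch of the commands as they appear in the trace (the "Cannot find device" / "Cannot open network namespace" lines earlier are only the best-effort cleanup of a previous run and are omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # initiator -> both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

The three pings confirm reachability in both directions before nvmf_tgt is launched inside the namespace (traced just above with -i 0 -e 0xFFFF -m 0xE).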
00:07:51.854 [2024-07-15 15:53:45.501133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.854 [2024-07-15 15:53:45.501300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.854 [2024-07-15 15:53:45.501516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.854 [2024-07-15 15:53:45.501646] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.854 [2024-07-15 15:53:45.502009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.854 [2024-07-15 15:53:45.502159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.854 [2024-07-15 15:53:45.502165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.805 15:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.805 15:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:52.805 15:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.805 15:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.805 15:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.805 15:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.805 15:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:52.805 15:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.063 [2024-07-15 15:53:46.535050] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.063 15:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:53.322 15:53:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.580 [2024-07-15 15:53:47.147778] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.580 15:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.838 15:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:54.096 Malloc0 00:07:54.096 15:53:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:54.661 Delay0 00:07:54.661 15:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.661 15:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:54.919 NULL1 00:07:54.919 
15:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:55.177 15:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68918 00:07:55.177 15:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:55.177 15:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:07:55.177 15:53:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.554 Read completed with error (sct=0, sc=11) 00:07:56.554 15:53:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.812 15:53:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:56.812 15:53:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:57.069 true 00:07:57.069 15:53:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:07:57.069 15:53:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.917 15:53:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.175 15:53:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:58.175 15:53:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:58.433 true 00:07:58.433 15:53:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:07:58.434 15:53:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.001 15:53:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.259 15:53:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:59.259 15:53:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:59.517 true 00:07:59.517 15:53:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:07:59.517 15:53:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.452 15:53:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.710 15:53:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:00.710 15:53:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:00.968 true 00:08:00.968 15:53:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:00.968 15:53:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.227 15:53:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.485 15:53:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:01.485 15:53:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:01.742 true 00:08:01.742 15:53:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:01.742 15:53:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.312 15:53:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.312 15:53:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:02.312 15:53:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1006 00:08:02.570 true 00:08:02.570 15:53:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:02.570 15:53:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.504 15:53:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.762 15:53:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:03.762 15:53:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:04.018 true 00:08:04.018 15:53:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:04.018 15:53:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.276 15:53:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.532 15:53:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:04.532 15:53:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:04.790 true 00:08:04.790 15:53:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:04.790 15:53:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.047 15:53:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.343 15:53:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:05.343 15:53:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:05.621 true 00:08:05.621 15:53:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:05.621 15:53:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.553 15:54:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.811 15:54:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:06.811 15:54:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:07.069 true 00:08:07.069 15:54:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:07.069 15:54:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.327 15:54:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.585 15:54:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:07.585 15:54:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:07.842 true 00:08:07.842 15:54:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:07.842 15:54:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.101 15:54:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.101 15:54:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:08.101 15:54:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:08.359 true 00:08:08.359 15:54:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:08.359 15:54:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.731 15:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.731 15:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:09.731 15:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:09.989 true 00:08:10.247 15:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:10.247 15:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.813 15:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.072 15:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:11.072 15:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:11.330 true 00:08:11.330 15:54:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:11.330 15:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.588 15:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.845 15:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:11.845 15:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:12.103 true 00:08:12.103 15:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:12.103 15:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.360 15:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.617 15:54:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:12.617 15:54:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:12.874 true 00:08:12.874 15:54:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:12.874 15:54:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.804 15:54:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.061 15:54:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:14.061 15:54:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:14.330 true 00:08:14.330 15:54:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:14.330 15:54:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.264 15:54:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.521 15:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:15.521 15:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:15.779 true 00:08:15.779 15:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:15.779 15:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.036 15:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.601 15:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:16.601 15:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:16.601 true 00:08:16.859 15:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:16.859 15:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.116 15:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.374 15:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:17.374 15:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:17.631 true 00:08:17.631 15:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:17.631 15:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.889 15:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.147 15:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:18.147 15:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:18.713 true 00:08:18.713 15:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:18.713 15:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.713 15:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.970 15:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:18.970 15:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:19.242 true 00:08:19.531 15:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:19.531 15:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.097 15:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.355 15:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:20.355 15:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:20.613 true 00:08:20.872 15:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:20.872 15:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.131 15:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.389 15:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:21.389 15:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:21.647 true 00:08:21.647 15:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:21.647 15:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.905 15:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.162 15:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:22.162 15:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:22.419 true 00:08:22.419 15:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:22.419 15:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.350 15:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.350 15:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:23.350 15:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:23.915 true 00:08:23.915 15:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:23.915 15:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.172 15:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.430 15:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1027 00:08:24.430 15:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:24.687 true 00:08:24.687 15:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:24.687 15:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.945 15:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.256 15:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:25.256 15:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:25.514 Initializing NVMe Controllers 00:08:25.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:25.514 Controller IO queue size 128, less than required. 00:08:25.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:25.514 Controller IO queue size 128, less than required. 00:08:25.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:25.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:25.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:25.514 Initialization complete. Launching workers. 00:08:25.514 ======================================================== 00:08:25.514 Latency(us) 00:08:25.514 Device Information : IOPS MiB/s Average min max 00:08:25.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1365.91 0.67 44081.34 3207.18 1033735.49 00:08:25.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9199.68 4.49 13912.90 3601.16 584928.33 00:08:25.514 ======================================================== 00:08:25.514 Total : 10565.59 5.16 17813.04 3207.18 1033735.49 00:08:25.514 00:08:25.514 true 00:08:25.514 15:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68918 00:08:25.514 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68918) - No such process 00:08:25.514 15:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68918 00:08:25.514 15:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.773 15:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:26.030 15:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:26.030 15:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:26.030 15:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:26.030 15:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:26.030 15:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:26.288 null0 00:08:26.546 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:26.546 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:26.546 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:26.805 null1 00:08:26.805 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:26.805 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:26.805 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:27.063 null2 00:08:27.063 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.063 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.063 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:27.321 null3 00:08:27.321 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.321 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.321 15:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:27.580 null4 00:08:27.580 15:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.580 15:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.580 15:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:27.838 null5 00:08:27.838 15:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.838 15:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.838 15:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:28.095 null6 00:08:28.096 15:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:28.096 15:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:28.096 15:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:28.661 null7 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
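Condensing the RPC trace above: with nvmf_tgt running inside nvmf_tgt_ns_spdk, the script provisions one TCP subsystem backed by a delay bdev and a null bdev, starts a 30-second spdk_nvme_perf randread run against it, and keeps churning namespace 1 and resizing NULL1 until perf exits. A rough sketch read from the trace (the loop shape is inferred from the repeated @44-@50 lines, not copied from ns_hotplug_stress.sh):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # transport, subsystem (up to 10 namespaces), data listener and discovery listener
  "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
  "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc_py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # backing bdevs: a 1 s delay bdev over a 32 MB malloc bdev, plus a resizable null bdev
  "$rpc_py" bdev_malloc_create 32 512 -b Malloc0
  "$rpc_py" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # becomes NSID 1
  "$rpc_py" bdev_null_create NULL1 1000 512
  "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1      # becomes NSID 2

  # 30 s of queued randread I/O from the initiator side while NSID 1 is hot-removed/re-added
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID"; do
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      "$rpc_py" bdev_null_resize NULL1 $(( ++null_size ))               # 1001, 1002, ...
  done
  wait "$PERF_PID"

The perf summary printed above is consistent with this split: NSID 1 (Delay0, constantly removed and re-added, with a 1 s configured latency) completes roughly 1.4K IOPS at about 44 ms average latency, while NSID 2 (NULL1, only resized) sustains about 9.2K IOPS at about 14 ms. The null0 through null7 bdevs created just above feed the second, multi-threaded phase of the test.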
00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.661 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.662 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69935 69937 69938 69940 69942 69943 69946 69948 00:08:28.920 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:28.920 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:28.920 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:28.920 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:28.920 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:28.920 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:28.920 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
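The eight background jobs launched above all run the same add_remove helper from ns_hotplug_stress.sh; as read from the trace (local nsid/bdev, a 10-iteration loop, one add_ns and one remove_ns per pass), each worker looks roughly like this:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  add_remove() {                        # called as add_remove 1 null0 ... add_remove 8 null7
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; i++ )); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  pids=()
  for (( n = 0; n < 8; n++ )); do       # nthreads=8 in the trace
      add_remove $(( n + 1 )) "null$n" &
      pids+=($!)
  done
  wait "${pids[@]}"                     # pids 69935 69937 ... 69948 above

Eight concurrent add/remove streams against the same subsystem produce the interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns churn that fills the remainder of this test's output.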
00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.178 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.435 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.435 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.435 15:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.435 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.435 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.435 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.435 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.435 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.435 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.435 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.693 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.693 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.693 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:29.693 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.693 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.693 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.693 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.693 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.693 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.693 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.951 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:30.209 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:30.209 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.209 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:30.209 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:30.209 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:30.209 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:30.209 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.209 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.209 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.209 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:30.467 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.467 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.467 15:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.467 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.468 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:30.468 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.468 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.468 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:30.725 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:30.725 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:30.725 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.725 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:30.725 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:30.725 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:30.988 15:54:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.988 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.246 15:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.504 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:31.763 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.021 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:32.021 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.021 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.021 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.021 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.280 15:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.538 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:32.538 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.538 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.538 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.538 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.538 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.538 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.538 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.538 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.538 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.538 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.797 15:54:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.797 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.055 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.313 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.314 15:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.314 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.314 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.314 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.572 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.829 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.829 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.829 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.829 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.829 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.829 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.829 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.829 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.829 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.829 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.829 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.100 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.358 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.358 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.358 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.358 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.358 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.358 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.358 15:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.358 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.616 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.616 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.616 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.616 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.616 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.616 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.616 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.616 15:54:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.616 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.616 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.616 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.616 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.874 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.874 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.874 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.874 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.874 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.874 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.874 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:34.874 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:34.874 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.874 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.132 rmmod nvme_tcp 00:08:35.132 rmmod nvme_fabrics 00:08:35.132 rmmod nvme_keyring 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68781 ']' 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68781 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68781 ']' 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68781 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68781 00:08:35.132 killing process with pid 68781 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68781' 00:08:35.132 
15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68781 00:08:35.132 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68781 00:08:35.391 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.391 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.391 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.391 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.391 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.391 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.391 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.391 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.391 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:35.391 ************************************ 00:08:35.391 END TEST nvmf_ns_hotplug_stress 00:08:35.391 ************************************ 00:08:35.391 00:08:35.391 real 0m44.354s 00:08:35.391 user 3m36.492s 00:08:35.391 sys 0m14.124s 00:08:35.391 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.391 15:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.391 15:54:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:35.391 15:54:29 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:35.391 15:54:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:35.391 15:54:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.391 15:54:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.391 ************************************ 00:08:35.391 START TEST nvmf_connect_stress 00:08:35.391 ************************************ 00:08:35.391 15:54:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:35.391 * Looking for test storage... 
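Once the workers return, the trap is cleared and nvmftestfini tears the target environment down before the timing summary is printed. The block below condenses the cleanup commands visible in the nvmf/common.sh and autotest_common.sh trace above; it is a summary of the logged steps, not the functions' exact source:

    # Condensed teardown as logged by nvmftestfini/nvmfcleanup/killprocess above; a sketch, not the real functions.
    sync                                  # flush outstanding I/O before unloading kernel modules
    modprobe -v -r nvme-tcp               # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines come from this
    modprobe -v -r nvme-fabrics

    kill "$nvmfpid" && wait "$nvmfpid"    # stop the nvmf_tgt reactor (pid 68781 in this run)

    _remove_spdk_ns                       # drop the nvmf_tgt_ns_spdk network namespace
    ip -4 addr flush nvmf_init_if         # clear the initiator-side test address
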
00:08:35.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:35.391 15:54:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.391 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.650 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:35.651 Cannot find device "nvmf_tgt_br" 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.651 Cannot find device "nvmf_tgt_br2" 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:35.651 Cannot find device "nvmf_tgt_br" 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:35.651 Cannot find device "nvmf_tgt_br2" 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:35.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:35.651 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:35.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:35.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:08:35.909 00:08:35.909 --- 10.0.0.2 ping statistics --- 00:08:35.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.909 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:35.909 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:35.909 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:08:35.909 00:08:35.909 --- 10.0.0.3 ping statistics --- 00:08:35.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.909 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:35.909 00:08:35.909 --- 10.0.0.1 ping statistics --- 00:08:35.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.909 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=71274 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 71274 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 71274 ']' 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
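The nvmf_veth_init trace above boils down to a small fixed topology: one veth pair for the initiator and two for the target, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers enslaved to the nvmf_br bridge. Condensed from the traced commands only (the failed pre-cleanup that produces the 'Cannot find device' messages is left out):

    # condensed sketch of nvmf_veth_init as traced above (no error handling)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target port 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target port 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge path, after which nvmf_tgt is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE), so the target's TCP listener sits behind the bridge while the initiator side connects from 10.0.0.1.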
00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.909 15:54:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.909 [2024-07-15 15:54:29.513847] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:08:35.909 [2024-07-15 15:54:29.513981] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.197 [2024-07-15 15:54:29.650443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.197 [2024-07-15 15:54:29.777140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.197 [2024-07-15 15:54:29.777234] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.197 [2024-07-15 15:54:29.777246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.197 [2024-07-15 15:54:29.777254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.197 [2024-07-15 15:54:29.777263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.197 [2024-07-15 15:54:29.777410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.197 [2024-07-15 15:54:29.777539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.197 [2024-07-15 15:54:29.777556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.795 [2024-07-15 15:54:30.500364] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.795 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.795 [2024-07-15 15:54:30.518204] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.054 NULL1 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71326 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.054 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.313 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.313 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:37.313 15:54:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.313 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.313 15:54:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.571 15:54:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.571 15:54:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:37.571 15:54:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.571 15:54:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.571 15:54:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.137 15:54:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:08:38.137 15:54:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:38.137 15:54:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.137 15:54:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.137 15:54:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.395 15:54:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.395 15:54:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:38.395 15:54:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.395 15:54:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.395 15:54:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.652 15:54:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.652 15:54:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:38.652 15:54:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.652 15:54:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.652 15:54:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.911 15:54:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.911 15:54:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:38.911 15:54:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.911 15:54:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.911 15:54:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.170 15:54:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.170 15:54:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:39.170 15:54:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.170 15:54:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.170 15:54:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.738 15:54:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.738 15:54:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:39.738 15:54:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.738 15:54:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.738 15:54:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.995 15:54:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.995 15:54:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:39.995 15:54:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.995 15:54:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.995 15:54:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.252 15:54:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.252 15:54:33 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 71326 00:08:40.252 15:54:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.252 15:54:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.252 15:54:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.509 15:54:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.509 15:54:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:40.509 15:54:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.509 15:54:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.509 15:54:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.801 15:54:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.801 15:54:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:40.801 15:54:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.801 15:54:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.801 15:54:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.367 15:54:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.367 15:54:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:41.367 15:54:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.367 15:54:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.367 15:54:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.625 15:54:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.625 15:54:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:41.625 15:54:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.625 15:54:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.625 15:54:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.884 15:54:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.884 15:54:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:41.884 15:54:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.884 15:54:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.884 15:54:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.142 15:54:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.142 15:54:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:42.142 15:54:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.142 15:54:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.142 15:54:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.400 15:54:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.400 15:54:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:42.400 15:54:36 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.400 15:54:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.400 15:54:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.966 15:54:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.966 15:54:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:42.966 15:54:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.966 15:54:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.966 15:54:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.223 15:54:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.223 15:54:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:43.223 15:54:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.223 15:54:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.223 15:54:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.482 15:54:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.482 15:54:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:43.482 15:54:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.482 15:54:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.482 15:54:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.740 15:54:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.740 15:54:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:43.740 15:54:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.740 15:54:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.740 15:54:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.999 15:54:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.999 15:54:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:43.999 15:54:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.999 15:54:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.999 15:54:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.580 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.580 15:54:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:44.580 15:54:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.580 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.580 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.837 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.837 15:54:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:44.837 15:54:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
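The repeated 'kill -0 71326' / 'rpc_cmd' pairs through this stretch are connect_stress.sh watching the stress client it started (PERF_PID=71326) while feeding the target batched RPCs from rpc.txt. The contents of rpc.txt are not visible in this log, so the redirect below is only the shape of the loop, not its exact body:

    # shape of the watch loop behind the repeated "kill -0 71326" / "rpc_cmd" traces
    while kill -0 "$PERF_PID"; do      # 71326 = the connect_stress client
        rpc_cmd < "$rpcs"              # replay the batch assembled by the seq-1-20 loop above
    done
    wait "$PERF_PID"                   # matches the 'wait 71326' once kill -0 starts failing

The 'kill: (71326) - No such process' message further down is just the final kill -0 probe failing once the client has exited at the end of its run (it was started with -t 10).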
00:08:44.837 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.837 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.095 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.095 15:54:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:45.095 15:54:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.095 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.095 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.353 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.353 15:54:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:45.353 15:54:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.353 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.353 15:54:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.610 15:54:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.610 15:54:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:45.610 15:54:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.610 15:54:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.610 15:54:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.176 15:54:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.176 15:54:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:46.176 15:54:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.176 15:54:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.176 15:54:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.460 15:54:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.460 15:54:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:46.460 15:54:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.460 15:54:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.460 15:54:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.717 15:54:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.717 15:54:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:46.717 15:54:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.717 15:54:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.717 15:54:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.975 15:54:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.975 15:54:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:46.975 15:54:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.975 15:54:40 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.975 15:54:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.232 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:47.232 15:54:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.232 15:54:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71326 00:08:47.232 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71326) - No such process 00:08:47.232 15:54:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71326 00:08:47.232 15:54:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:47.232 15:54:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:47.232 15:54:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:47.232 15:54:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:47.232 15:54:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:47.489 15:54:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:47.489 15:54:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:47.489 15:54:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:47.489 15:54:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:47.489 rmmod nvme_tcp 00:08:47.489 rmmod nvme_fabrics 00:08:47.489 rmmod nvme_keyring 00:08:47.489 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:47.489 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:47.489 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:47.489 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 71274 ']' 00:08:47.489 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 71274 00:08:47.489 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 71274 ']' 00:08:47.489 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 71274 00:08:47.489 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:47.489 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:47.489 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71274 00:08:47.489 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:47.489 killing process with pid 71274 00:08:47.490 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:47.490 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71274' 00:08:47.490 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 71274 00:08:47.490 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 71274 00:08:47.748 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:47.748 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:47.748 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
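From here the trace is the usual teardown: the batching loop has ended, rpc.txt is removed, the EXIT trap is cleared, and nvmftestfini syncs, unloads the initiator-side NVMe modules, and kills the target before nvmf_tcp_fini (continued below) removes the namespace and flushes the initiator address. Compressed to just the traced commands:

    # teardown order as traced (simplified; command output and retries omitted)
    rm -f "$rpcs"                  # drop the rpc.txt batch file
    trap - SIGINT SIGTERM EXIT
    sync
    modprobe -v -r nvme-tcp        # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines in the output
    modprobe -v -r nvme-fabrics
    killprocess 71274              # the nvmf_tgt started inside the namespace
    nvmf_tcp_fini                  # _remove_spdk_ns + 'ip -4 addr flush nvmf_init_if' below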
00:08:47.748 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:47.748 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:47.748 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.748 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.748 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.748 15:54:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:47.748 00:08:47.748 real 0m12.291s 00:08:47.748 user 0m40.625s 00:08:47.748 sys 0m3.492s 00:08:47.748 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.748 15:54:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.748 ************************************ 00:08:47.748 END TEST nvmf_connect_stress 00:08:47.748 ************************************ 00:08:47.748 15:54:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:47.748 15:54:41 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:47.748 15:54:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:47.748 15:54:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.748 15:54:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.748 ************************************ 00:08:47.748 START TEST nvmf_fused_ordering 00:08:47.748 ************************************ 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:47.748 * Looking for test storage... 
00:08:47.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.748 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.006 15:54:41 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.006 15:54:41 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.006 15:54:41 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.006 15:54:41 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.006 15:54:41 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.006 15:54:41 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.006 15:54:41 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:48.007 Cannot find device "nvmf_tgt_br" 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.007 Cannot find device "nvmf_tgt_br2" 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:48.007 Cannot find device "nvmf_tgt_br" 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:48.007 Cannot find device "nvmf_tgt_br2" 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:48.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:48.007 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:48.264 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:48.264 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:48.264 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:48.264 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:48.264 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:48.264 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:48.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:08:48.265 00:08:48.265 --- 10.0.0.2 ping statistics --- 00:08:48.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.265 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:48.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:48.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:08:48.265 00:08:48.265 --- 10.0.0.3 ping statistics --- 00:08:48.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.265 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:48.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:08:48.265 00:08:48.265 --- 10.0.0.1 ping statistics --- 00:08:48.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.265 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71652 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71652 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71652 ']' 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
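The 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' line comes from waitforlisten, which blocks until the nvmf_tgt just launched for this test (pid 71652) answers on its RPC socket. A simplified sketch of that helper; the real one in autotest_common.sh carries more argument handling and a bounded retry count, and probing with rpc_get_methods here is an assumption about the probe it uses:

    # simplified shape of waitforlisten (probe method assumed, see note above)
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ! scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; do
            kill -0 "$pid" || return 1   # target died before it ever listened
            sleep 0.1
        done
    }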
00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.265 15:54:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.265 [2024-07-15 15:54:41.931625] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:08:48.265 [2024-07-15 15:54:41.932034] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.523 [2024-07-15 15:54:42.072830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.523 [2024-07-15 15:54:42.212014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.523 [2024-07-15 15:54:42.212078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.523 [2024-07-15 15:54:42.212094] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.523 [2024-07-15 15:54:42.212104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.523 [2024-07-15 15:54:42.212113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.523 [2024-07-15 15:54:42.212152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.456 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:49.457 [2024-07-15 15:54:42.973365] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
00:08:49.457 [2024-07-15 15:54:42.993446] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.457 15:54:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:49.457 NULL1 00:08:49.457 15:54:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.457 15:54:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:49.457 15:54:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.457 15:54:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:49.457 15:54:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.457 15:54:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:49.457 15:54:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.457 15:54:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:49.457 15:54:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.457 15:54:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:49.457 [2024-07-15 15:54:43.047218] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
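With the target up, the fused_ordering fixture above is built from a handful of rpc_cmd calls (the harness wrapper around scripts/rpc.py) and the test binary is then pointed at the resulting listener. A condensed, hedged recap with all arguments copied from the trace; only the binary path is shortened to be repo-relative:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The Attached/Namespace lines and the fused_ordering(0) through fused_ordering(1023) counters that follow are the tool's own progress output.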
00:08:49.457 [2024-07-15 15:54:43.047292] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71702 ] 00:08:50.021 Attached to nqn.2016-06.io.spdk:cnode1 00:08:50.021 Namespace ID: 1 size: 1GB 00:08:50.021 fused_ordering(0) 00:08:50.021 fused_ordering(1) 00:08:50.021 fused_ordering(2) 00:08:50.021 fused_ordering(3) 00:08:50.021 fused_ordering(4) 00:08:50.021 fused_ordering(5) 00:08:50.021 fused_ordering(6) 00:08:50.021 fused_ordering(7) 00:08:50.021 fused_ordering(8) 00:08:50.021 fused_ordering(9) 00:08:50.021 fused_ordering(10) 00:08:50.021 fused_ordering(11) 00:08:50.021 fused_ordering(12) 00:08:50.021 fused_ordering(13) 00:08:50.021 fused_ordering(14) 00:08:50.021 fused_ordering(15) 00:08:50.021 fused_ordering(16) 00:08:50.021 fused_ordering(17) 00:08:50.021 fused_ordering(18) 00:08:50.021 fused_ordering(19) 00:08:50.021 fused_ordering(20) 00:08:50.021 fused_ordering(21) 00:08:50.021 fused_ordering(22) 00:08:50.021 fused_ordering(23) 00:08:50.021 fused_ordering(24) 00:08:50.021 fused_ordering(25) 00:08:50.021 fused_ordering(26) 00:08:50.021 fused_ordering(27) 00:08:50.021 fused_ordering(28) 00:08:50.021 fused_ordering(29) 00:08:50.021 fused_ordering(30) 00:08:50.021 fused_ordering(31) 00:08:50.021 fused_ordering(32) 00:08:50.021 fused_ordering(33) 00:08:50.021 fused_ordering(34) 00:08:50.021 fused_ordering(35) 00:08:50.021 fused_ordering(36) 00:08:50.021 fused_ordering(37) 00:08:50.021 fused_ordering(38) 00:08:50.021 fused_ordering(39) 00:08:50.021 fused_ordering(40) 00:08:50.021 fused_ordering(41) 00:08:50.021 fused_ordering(42) 00:08:50.021 fused_ordering(43) 00:08:50.021 fused_ordering(44) 00:08:50.021 fused_ordering(45) 00:08:50.021 fused_ordering(46) 00:08:50.021 fused_ordering(47) 00:08:50.021 fused_ordering(48) 00:08:50.021 fused_ordering(49) 00:08:50.021 fused_ordering(50) 00:08:50.021 fused_ordering(51) 00:08:50.021 fused_ordering(52) 00:08:50.021 fused_ordering(53) 00:08:50.021 fused_ordering(54) 00:08:50.021 fused_ordering(55) 00:08:50.021 fused_ordering(56) 00:08:50.021 fused_ordering(57) 00:08:50.021 fused_ordering(58) 00:08:50.021 fused_ordering(59) 00:08:50.021 fused_ordering(60) 00:08:50.021 fused_ordering(61) 00:08:50.021 fused_ordering(62) 00:08:50.021 fused_ordering(63) 00:08:50.021 fused_ordering(64) 00:08:50.021 fused_ordering(65) 00:08:50.021 fused_ordering(66) 00:08:50.021 fused_ordering(67) 00:08:50.021 fused_ordering(68) 00:08:50.021 fused_ordering(69) 00:08:50.021 fused_ordering(70) 00:08:50.021 fused_ordering(71) 00:08:50.021 fused_ordering(72) 00:08:50.021 fused_ordering(73) 00:08:50.021 fused_ordering(74) 00:08:50.021 fused_ordering(75) 00:08:50.021 fused_ordering(76) 00:08:50.021 fused_ordering(77) 00:08:50.021 fused_ordering(78) 00:08:50.021 fused_ordering(79) 00:08:50.021 fused_ordering(80) 00:08:50.021 fused_ordering(81) 00:08:50.021 fused_ordering(82) 00:08:50.021 fused_ordering(83) 00:08:50.021 fused_ordering(84) 00:08:50.021 fused_ordering(85) 00:08:50.021 fused_ordering(86) 00:08:50.021 fused_ordering(87) 00:08:50.021 fused_ordering(88) 00:08:50.021 fused_ordering(89) 00:08:50.021 fused_ordering(90) 00:08:50.021 fused_ordering(91) 00:08:50.021 fused_ordering(92) 00:08:50.021 fused_ordering(93) 00:08:50.021 fused_ordering(94) 00:08:50.021 fused_ordering(95) 00:08:50.021 fused_ordering(96) 00:08:50.021 fused_ordering(97) 00:08:50.021 
fused_ordering(98) 00:08:50.021 fused_ordering(99) 00:08:50.021 fused_ordering(100) 00:08:50.021 fused_ordering(101) 00:08:50.021 fused_ordering(102) 00:08:50.021 fused_ordering(103) 00:08:50.021 fused_ordering(104) 00:08:50.021 fused_ordering(105) 00:08:50.021 fused_ordering(106) 00:08:50.021 fused_ordering(107) 00:08:50.021 fused_ordering(108) 00:08:50.021 fused_ordering(109) 00:08:50.021 fused_ordering(110) 00:08:50.021 fused_ordering(111) 00:08:50.021 fused_ordering(112) 00:08:50.021 fused_ordering(113) 00:08:50.021 fused_ordering(114) 00:08:50.021 fused_ordering(115) 00:08:50.021 fused_ordering(116) 00:08:50.021 fused_ordering(117) 00:08:50.021 fused_ordering(118) 00:08:50.021 fused_ordering(119) 00:08:50.021 fused_ordering(120) 00:08:50.021 fused_ordering(121) 00:08:50.021 fused_ordering(122) 00:08:50.021 fused_ordering(123) 00:08:50.021 fused_ordering(124) 00:08:50.021 fused_ordering(125) 00:08:50.021 fused_ordering(126) 00:08:50.021 fused_ordering(127) 00:08:50.021 fused_ordering(128) 00:08:50.021 fused_ordering(129) 00:08:50.021 fused_ordering(130) 00:08:50.021 fused_ordering(131) 00:08:50.021 fused_ordering(132) 00:08:50.021 fused_ordering(133) 00:08:50.021 fused_ordering(134) 00:08:50.021 fused_ordering(135) 00:08:50.021 fused_ordering(136) 00:08:50.021 fused_ordering(137) 00:08:50.021 fused_ordering(138) 00:08:50.021 fused_ordering(139) 00:08:50.021 fused_ordering(140) 00:08:50.021 fused_ordering(141) 00:08:50.021 fused_ordering(142) 00:08:50.021 fused_ordering(143) 00:08:50.021 fused_ordering(144) 00:08:50.021 fused_ordering(145) 00:08:50.021 fused_ordering(146) 00:08:50.021 fused_ordering(147) 00:08:50.021 fused_ordering(148) 00:08:50.021 fused_ordering(149) 00:08:50.021 fused_ordering(150) 00:08:50.021 fused_ordering(151) 00:08:50.021 fused_ordering(152) 00:08:50.021 fused_ordering(153) 00:08:50.021 fused_ordering(154) 00:08:50.021 fused_ordering(155) 00:08:50.021 fused_ordering(156) 00:08:50.021 fused_ordering(157) 00:08:50.021 fused_ordering(158) 00:08:50.021 fused_ordering(159) 00:08:50.021 fused_ordering(160) 00:08:50.021 fused_ordering(161) 00:08:50.021 fused_ordering(162) 00:08:50.021 fused_ordering(163) 00:08:50.021 fused_ordering(164) 00:08:50.021 fused_ordering(165) 00:08:50.021 fused_ordering(166) 00:08:50.021 fused_ordering(167) 00:08:50.021 fused_ordering(168) 00:08:50.021 fused_ordering(169) 00:08:50.021 fused_ordering(170) 00:08:50.021 fused_ordering(171) 00:08:50.021 fused_ordering(172) 00:08:50.021 fused_ordering(173) 00:08:50.021 fused_ordering(174) 00:08:50.021 fused_ordering(175) 00:08:50.021 fused_ordering(176) 00:08:50.021 fused_ordering(177) 00:08:50.021 fused_ordering(178) 00:08:50.021 fused_ordering(179) 00:08:50.021 fused_ordering(180) 00:08:50.021 fused_ordering(181) 00:08:50.021 fused_ordering(182) 00:08:50.021 fused_ordering(183) 00:08:50.021 fused_ordering(184) 00:08:50.021 fused_ordering(185) 00:08:50.021 fused_ordering(186) 00:08:50.021 fused_ordering(187) 00:08:50.021 fused_ordering(188) 00:08:50.021 fused_ordering(189) 00:08:50.021 fused_ordering(190) 00:08:50.021 fused_ordering(191) 00:08:50.021 fused_ordering(192) 00:08:50.021 fused_ordering(193) 00:08:50.021 fused_ordering(194) 00:08:50.021 fused_ordering(195) 00:08:50.021 fused_ordering(196) 00:08:50.021 fused_ordering(197) 00:08:50.021 fused_ordering(198) 00:08:50.021 fused_ordering(199) 00:08:50.021 fused_ordering(200) 00:08:50.021 fused_ordering(201) 00:08:50.021 fused_ordering(202) 00:08:50.021 fused_ordering(203) 00:08:50.021 fused_ordering(204) 00:08:50.021 fused_ordering(205) 
00:08:50.279 fused_ordering(206) 00:08:50.279 fused_ordering(207) 00:08:50.279 fused_ordering(208) 00:08:50.279 fused_ordering(209) 00:08:50.279 fused_ordering(210) 00:08:50.279 fused_ordering(211) 00:08:50.279 fused_ordering(212) 00:08:50.279 fused_ordering(213) 00:08:50.279 fused_ordering(214) 00:08:50.279 fused_ordering(215) 00:08:50.279 fused_ordering(216) 00:08:50.279 fused_ordering(217) 00:08:50.279 fused_ordering(218) 00:08:50.279 fused_ordering(219) 00:08:50.279 fused_ordering(220) 00:08:50.279 fused_ordering(221) 00:08:50.279 fused_ordering(222) 00:08:50.279 fused_ordering(223) 00:08:50.279 fused_ordering(224) 00:08:50.279 fused_ordering(225) 00:08:50.279 fused_ordering(226) 00:08:50.279 fused_ordering(227) 00:08:50.279 fused_ordering(228) 00:08:50.279 fused_ordering(229) 00:08:50.279 fused_ordering(230) 00:08:50.279 fused_ordering(231) 00:08:50.279 fused_ordering(232) 00:08:50.279 fused_ordering(233) 00:08:50.279 fused_ordering(234) 00:08:50.279 fused_ordering(235) 00:08:50.279 fused_ordering(236) 00:08:50.279 fused_ordering(237) 00:08:50.279 fused_ordering(238) 00:08:50.279 fused_ordering(239) 00:08:50.279 fused_ordering(240) 00:08:50.279 fused_ordering(241) 00:08:50.279 fused_ordering(242) 00:08:50.279 fused_ordering(243) 00:08:50.279 fused_ordering(244) 00:08:50.279 fused_ordering(245) 00:08:50.279 fused_ordering(246) 00:08:50.279 fused_ordering(247) 00:08:50.279 fused_ordering(248) 00:08:50.279 fused_ordering(249) 00:08:50.279 fused_ordering(250) 00:08:50.279 fused_ordering(251) 00:08:50.279 fused_ordering(252) 00:08:50.279 fused_ordering(253) 00:08:50.279 fused_ordering(254) 00:08:50.279 fused_ordering(255) 00:08:50.279 fused_ordering(256) 00:08:50.279 fused_ordering(257) 00:08:50.279 fused_ordering(258) 00:08:50.279 fused_ordering(259) 00:08:50.279 fused_ordering(260) 00:08:50.279 fused_ordering(261) 00:08:50.279 fused_ordering(262) 00:08:50.279 fused_ordering(263) 00:08:50.279 fused_ordering(264) 00:08:50.279 fused_ordering(265) 00:08:50.279 fused_ordering(266) 00:08:50.279 fused_ordering(267) 00:08:50.279 fused_ordering(268) 00:08:50.279 fused_ordering(269) 00:08:50.279 fused_ordering(270) 00:08:50.279 fused_ordering(271) 00:08:50.279 fused_ordering(272) 00:08:50.279 fused_ordering(273) 00:08:50.279 fused_ordering(274) 00:08:50.279 fused_ordering(275) 00:08:50.279 fused_ordering(276) 00:08:50.279 fused_ordering(277) 00:08:50.279 fused_ordering(278) 00:08:50.279 fused_ordering(279) 00:08:50.279 fused_ordering(280) 00:08:50.279 fused_ordering(281) 00:08:50.279 fused_ordering(282) 00:08:50.279 fused_ordering(283) 00:08:50.279 fused_ordering(284) 00:08:50.279 fused_ordering(285) 00:08:50.279 fused_ordering(286) 00:08:50.279 fused_ordering(287) 00:08:50.279 fused_ordering(288) 00:08:50.279 fused_ordering(289) 00:08:50.279 fused_ordering(290) 00:08:50.279 fused_ordering(291) 00:08:50.279 fused_ordering(292) 00:08:50.279 fused_ordering(293) 00:08:50.279 fused_ordering(294) 00:08:50.279 fused_ordering(295) 00:08:50.279 fused_ordering(296) 00:08:50.279 fused_ordering(297) 00:08:50.279 fused_ordering(298) 00:08:50.279 fused_ordering(299) 00:08:50.279 fused_ordering(300) 00:08:50.279 fused_ordering(301) 00:08:50.279 fused_ordering(302) 00:08:50.279 fused_ordering(303) 00:08:50.279 fused_ordering(304) 00:08:50.279 fused_ordering(305) 00:08:50.279 fused_ordering(306) 00:08:50.279 fused_ordering(307) 00:08:50.279 fused_ordering(308) 00:08:50.279 fused_ordering(309) 00:08:50.279 fused_ordering(310) 00:08:50.279 fused_ordering(311) 00:08:50.279 fused_ordering(312) 00:08:50.279 
fused_ordering(313) 00:08:50.279 fused_ordering(314) 00:08:50.279 fused_ordering(315) 00:08:50.279 fused_ordering(316) 00:08:50.279 fused_ordering(317) 00:08:50.279 fused_ordering(318) 00:08:50.279 fused_ordering(319) 00:08:50.279 fused_ordering(320) 00:08:50.279 fused_ordering(321) 00:08:50.279 fused_ordering(322) 00:08:50.279 fused_ordering(323) 00:08:50.279 fused_ordering(324) 00:08:50.279 fused_ordering(325) 00:08:50.279 fused_ordering(326) 00:08:50.279 fused_ordering(327) 00:08:50.279 fused_ordering(328) 00:08:50.279 fused_ordering(329) 00:08:50.279 fused_ordering(330) 00:08:50.279 fused_ordering(331) 00:08:50.279 fused_ordering(332) 00:08:50.279 fused_ordering(333) 00:08:50.279 fused_ordering(334) 00:08:50.279 fused_ordering(335) 00:08:50.279 fused_ordering(336) 00:08:50.279 fused_ordering(337) 00:08:50.279 fused_ordering(338) 00:08:50.279 fused_ordering(339) 00:08:50.279 fused_ordering(340) 00:08:50.279 fused_ordering(341) 00:08:50.279 fused_ordering(342) 00:08:50.279 fused_ordering(343) 00:08:50.279 fused_ordering(344) 00:08:50.279 fused_ordering(345) 00:08:50.279 fused_ordering(346) 00:08:50.279 fused_ordering(347) 00:08:50.279 fused_ordering(348) 00:08:50.279 fused_ordering(349) 00:08:50.279 fused_ordering(350) 00:08:50.279 fused_ordering(351) 00:08:50.279 fused_ordering(352) 00:08:50.279 fused_ordering(353) 00:08:50.279 fused_ordering(354) 00:08:50.279 fused_ordering(355) 00:08:50.279 fused_ordering(356) 00:08:50.279 fused_ordering(357) 00:08:50.279 fused_ordering(358) 00:08:50.279 fused_ordering(359) 00:08:50.279 fused_ordering(360) 00:08:50.279 fused_ordering(361) 00:08:50.279 fused_ordering(362) 00:08:50.279 fused_ordering(363) 00:08:50.279 fused_ordering(364) 00:08:50.279 fused_ordering(365) 00:08:50.279 fused_ordering(366) 00:08:50.279 fused_ordering(367) 00:08:50.279 fused_ordering(368) 00:08:50.279 fused_ordering(369) 00:08:50.279 fused_ordering(370) 00:08:50.279 fused_ordering(371) 00:08:50.279 fused_ordering(372) 00:08:50.279 fused_ordering(373) 00:08:50.279 fused_ordering(374) 00:08:50.279 fused_ordering(375) 00:08:50.279 fused_ordering(376) 00:08:50.279 fused_ordering(377) 00:08:50.279 fused_ordering(378) 00:08:50.279 fused_ordering(379) 00:08:50.279 fused_ordering(380) 00:08:50.279 fused_ordering(381) 00:08:50.279 fused_ordering(382) 00:08:50.279 fused_ordering(383) 00:08:50.279 fused_ordering(384) 00:08:50.279 fused_ordering(385) 00:08:50.279 fused_ordering(386) 00:08:50.279 fused_ordering(387) 00:08:50.279 fused_ordering(388) 00:08:50.279 fused_ordering(389) 00:08:50.279 fused_ordering(390) 00:08:50.279 fused_ordering(391) 00:08:50.279 fused_ordering(392) 00:08:50.279 fused_ordering(393) 00:08:50.279 fused_ordering(394) 00:08:50.279 fused_ordering(395) 00:08:50.279 fused_ordering(396) 00:08:50.279 fused_ordering(397) 00:08:50.279 fused_ordering(398) 00:08:50.279 fused_ordering(399) 00:08:50.279 fused_ordering(400) 00:08:50.279 fused_ordering(401) 00:08:50.279 fused_ordering(402) 00:08:50.279 fused_ordering(403) 00:08:50.279 fused_ordering(404) 00:08:50.279 fused_ordering(405) 00:08:50.279 fused_ordering(406) 00:08:50.279 fused_ordering(407) 00:08:50.280 fused_ordering(408) 00:08:50.280 fused_ordering(409) 00:08:50.280 fused_ordering(410) 00:08:50.537 fused_ordering(411) 00:08:50.537 fused_ordering(412) 00:08:50.537 fused_ordering(413) 00:08:50.537 fused_ordering(414) 00:08:50.537 fused_ordering(415) 00:08:50.537 fused_ordering(416) 00:08:50.537 fused_ordering(417) 00:08:50.537 fused_ordering(418) 00:08:50.537 fused_ordering(419) 00:08:50.537 fused_ordering(420) 
00:08:50.537 fused_ordering(421) 00:08:50.537 fused_ordering(422) 00:08:50.537 fused_ordering(423) 00:08:50.537 fused_ordering(424) 00:08:50.537 fused_ordering(425) 00:08:50.537 fused_ordering(426) 00:08:50.537 fused_ordering(427) 00:08:50.537 fused_ordering(428) 00:08:50.537 fused_ordering(429) 00:08:50.537 fused_ordering(430) 00:08:50.537 fused_ordering(431) 00:08:50.537 fused_ordering(432) 00:08:50.537 fused_ordering(433) 00:08:50.537 fused_ordering(434) 00:08:50.537 fused_ordering(435) 00:08:50.537 fused_ordering(436) 00:08:50.537 fused_ordering(437) 00:08:50.537 fused_ordering(438) 00:08:50.537 fused_ordering(439) 00:08:50.537 fused_ordering(440) 00:08:50.537 fused_ordering(441) 00:08:50.537 fused_ordering(442) 00:08:50.537 fused_ordering(443) 00:08:50.537 fused_ordering(444) 00:08:50.537 fused_ordering(445) 00:08:50.537 fused_ordering(446) 00:08:50.537 fused_ordering(447) 00:08:50.537 fused_ordering(448) 00:08:50.537 fused_ordering(449) 00:08:50.537 fused_ordering(450) 00:08:50.537 fused_ordering(451) 00:08:50.537 fused_ordering(452) 00:08:50.537 fused_ordering(453) 00:08:50.537 fused_ordering(454) 00:08:50.537 fused_ordering(455) 00:08:50.537 fused_ordering(456) 00:08:50.537 fused_ordering(457) 00:08:50.537 fused_ordering(458) 00:08:50.537 fused_ordering(459) 00:08:50.537 fused_ordering(460) 00:08:50.537 fused_ordering(461) 00:08:50.537 fused_ordering(462) 00:08:50.537 fused_ordering(463) 00:08:50.537 fused_ordering(464) 00:08:50.537 fused_ordering(465) 00:08:50.537 fused_ordering(466) 00:08:50.537 fused_ordering(467) 00:08:50.537 fused_ordering(468) 00:08:50.537 fused_ordering(469) 00:08:50.537 fused_ordering(470) 00:08:50.537 fused_ordering(471) 00:08:50.537 fused_ordering(472) 00:08:50.537 fused_ordering(473) 00:08:50.537 fused_ordering(474) 00:08:50.537 fused_ordering(475) 00:08:50.537 fused_ordering(476) 00:08:50.537 fused_ordering(477) 00:08:50.537 fused_ordering(478) 00:08:50.537 fused_ordering(479) 00:08:50.537 fused_ordering(480) 00:08:50.537 fused_ordering(481) 00:08:50.537 fused_ordering(482) 00:08:50.537 fused_ordering(483) 00:08:50.537 fused_ordering(484) 00:08:50.537 fused_ordering(485) 00:08:50.537 fused_ordering(486) 00:08:50.537 fused_ordering(487) 00:08:50.537 fused_ordering(488) 00:08:50.537 fused_ordering(489) 00:08:50.537 fused_ordering(490) 00:08:50.537 fused_ordering(491) 00:08:50.537 fused_ordering(492) 00:08:50.537 fused_ordering(493) 00:08:50.537 fused_ordering(494) 00:08:50.537 fused_ordering(495) 00:08:50.537 fused_ordering(496) 00:08:50.537 fused_ordering(497) 00:08:50.537 fused_ordering(498) 00:08:50.537 fused_ordering(499) 00:08:50.537 fused_ordering(500) 00:08:50.537 fused_ordering(501) 00:08:50.537 fused_ordering(502) 00:08:50.537 fused_ordering(503) 00:08:50.537 fused_ordering(504) 00:08:50.537 fused_ordering(505) 00:08:50.537 fused_ordering(506) 00:08:50.537 fused_ordering(507) 00:08:50.537 fused_ordering(508) 00:08:50.537 fused_ordering(509) 00:08:50.537 fused_ordering(510) 00:08:50.537 fused_ordering(511) 00:08:50.537 fused_ordering(512) 00:08:50.537 fused_ordering(513) 00:08:50.537 fused_ordering(514) 00:08:50.537 fused_ordering(515) 00:08:50.537 fused_ordering(516) 00:08:50.537 fused_ordering(517) 00:08:50.537 fused_ordering(518) 00:08:50.537 fused_ordering(519) 00:08:50.537 fused_ordering(520) 00:08:50.537 fused_ordering(521) 00:08:50.537 fused_ordering(522) 00:08:50.537 fused_ordering(523) 00:08:50.537 fused_ordering(524) 00:08:50.537 fused_ordering(525) 00:08:50.538 fused_ordering(526) 00:08:50.538 fused_ordering(527) 00:08:50.538 
fused_ordering(528) 00:08:50.538 fused_ordering(529) 00:08:50.538 fused_ordering(530) 00:08:50.538 fused_ordering(531) 00:08:50.538 fused_ordering(532) 00:08:50.538 fused_ordering(533) 00:08:50.538 fused_ordering(534) 00:08:50.538 fused_ordering(535) 00:08:50.538 fused_ordering(536) 00:08:50.538 fused_ordering(537) 00:08:50.538 fused_ordering(538) 00:08:50.538 fused_ordering(539) 00:08:50.538 fused_ordering(540) 00:08:50.538 fused_ordering(541) 00:08:50.538 fused_ordering(542) 00:08:50.538 fused_ordering(543) 00:08:50.538 fused_ordering(544) 00:08:50.538 fused_ordering(545) 00:08:50.538 fused_ordering(546) 00:08:50.538 fused_ordering(547) 00:08:50.538 fused_ordering(548) 00:08:50.538 fused_ordering(549) 00:08:50.538 fused_ordering(550) 00:08:50.538 fused_ordering(551) 00:08:50.538 fused_ordering(552) 00:08:50.538 fused_ordering(553) 00:08:50.538 fused_ordering(554) 00:08:50.538 fused_ordering(555) 00:08:50.538 fused_ordering(556) 00:08:50.538 fused_ordering(557) 00:08:50.538 fused_ordering(558) 00:08:50.538 fused_ordering(559) 00:08:50.538 fused_ordering(560) 00:08:50.538 fused_ordering(561) 00:08:50.538 fused_ordering(562) 00:08:50.538 fused_ordering(563) 00:08:50.538 fused_ordering(564) 00:08:50.538 fused_ordering(565) 00:08:50.538 fused_ordering(566) 00:08:50.538 fused_ordering(567) 00:08:50.538 fused_ordering(568) 00:08:50.538 fused_ordering(569) 00:08:50.538 fused_ordering(570) 00:08:50.538 fused_ordering(571) 00:08:50.538 fused_ordering(572) 00:08:50.538 fused_ordering(573) 00:08:50.538 fused_ordering(574) 00:08:50.538 fused_ordering(575) 00:08:50.538 fused_ordering(576) 00:08:50.538 fused_ordering(577) 00:08:50.538 fused_ordering(578) 00:08:50.538 fused_ordering(579) 00:08:50.538 fused_ordering(580) 00:08:50.538 fused_ordering(581) 00:08:50.538 fused_ordering(582) 00:08:50.538 fused_ordering(583) 00:08:50.538 fused_ordering(584) 00:08:50.538 fused_ordering(585) 00:08:50.538 fused_ordering(586) 00:08:50.538 fused_ordering(587) 00:08:50.538 fused_ordering(588) 00:08:50.538 fused_ordering(589) 00:08:50.538 fused_ordering(590) 00:08:50.538 fused_ordering(591) 00:08:50.538 fused_ordering(592) 00:08:50.538 fused_ordering(593) 00:08:50.538 fused_ordering(594) 00:08:50.538 fused_ordering(595) 00:08:50.538 fused_ordering(596) 00:08:50.538 fused_ordering(597) 00:08:50.538 fused_ordering(598) 00:08:50.538 fused_ordering(599) 00:08:50.538 fused_ordering(600) 00:08:50.538 fused_ordering(601) 00:08:50.538 fused_ordering(602) 00:08:50.538 fused_ordering(603) 00:08:50.538 fused_ordering(604) 00:08:50.538 fused_ordering(605) 00:08:50.538 fused_ordering(606) 00:08:50.538 fused_ordering(607) 00:08:50.538 fused_ordering(608) 00:08:50.538 fused_ordering(609) 00:08:50.538 fused_ordering(610) 00:08:50.538 fused_ordering(611) 00:08:50.538 fused_ordering(612) 00:08:50.538 fused_ordering(613) 00:08:50.538 fused_ordering(614) 00:08:50.538 fused_ordering(615) 00:08:51.103 fused_ordering(616) 00:08:51.103 fused_ordering(617) 00:08:51.103 fused_ordering(618) 00:08:51.103 fused_ordering(619) 00:08:51.103 fused_ordering(620) 00:08:51.103 fused_ordering(621) 00:08:51.103 fused_ordering(622) 00:08:51.103 fused_ordering(623) 00:08:51.103 fused_ordering(624) 00:08:51.103 fused_ordering(625) 00:08:51.103 fused_ordering(626) 00:08:51.103 fused_ordering(627) 00:08:51.103 fused_ordering(628) 00:08:51.103 fused_ordering(629) 00:08:51.103 fused_ordering(630) 00:08:51.103 fused_ordering(631) 00:08:51.103 fused_ordering(632) 00:08:51.103 fused_ordering(633) 00:08:51.103 fused_ordering(634) 00:08:51.103 fused_ordering(635) 
00:08:51.103 fused_ordering(636) 00:08:51.103 fused_ordering(637) 00:08:51.103 fused_ordering(638) 00:08:51.103 fused_ordering(639) 00:08:51.103 fused_ordering(640) 00:08:51.103 fused_ordering(641) 00:08:51.103 fused_ordering(642) 00:08:51.103 fused_ordering(643) 00:08:51.103 fused_ordering(644) 00:08:51.103 fused_ordering(645) 00:08:51.103 fused_ordering(646) 00:08:51.103 fused_ordering(647) 00:08:51.103 fused_ordering(648) 00:08:51.103 fused_ordering(649) 00:08:51.103 fused_ordering(650) 00:08:51.103 fused_ordering(651) 00:08:51.103 fused_ordering(652) 00:08:51.103 fused_ordering(653) 00:08:51.103 fused_ordering(654) 00:08:51.103 fused_ordering(655) 00:08:51.103 fused_ordering(656) 00:08:51.103 fused_ordering(657) 00:08:51.103 fused_ordering(658) 00:08:51.103 fused_ordering(659) 00:08:51.103 fused_ordering(660) 00:08:51.103 fused_ordering(661) 00:08:51.103 fused_ordering(662) 00:08:51.103 fused_ordering(663) 00:08:51.103 fused_ordering(664) 00:08:51.103 fused_ordering(665) 00:08:51.103 fused_ordering(666) 00:08:51.103 fused_ordering(667) 00:08:51.103 fused_ordering(668) 00:08:51.103 fused_ordering(669) 00:08:51.103 fused_ordering(670) 00:08:51.103 fused_ordering(671) 00:08:51.103 fused_ordering(672) 00:08:51.103 fused_ordering(673) 00:08:51.103 fused_ordering(674) 00:08:51.103 fused_ordering(675) 00:08:51.103 fused_ordering(676) 00:08:51.103 fused_ordering(677) 00:08:51.103 fused_ordering(678) 00:08:51.103 fused_ordering(679) 00:08:51.103 fused_ordering(680) 00:08:51.103 fused_ordering(681) 00:08:51.103 fused_ordering(682) 00:08:51.103 fused_ordering(683) 00:08:51.103 fused_ordering(684) 00:08:51.103 fused_ordering(685) 00:08:51.103 fused_ordering(686) 00:08:51.103 fused_ordering(687) 00:08:51.103 fused_ordering(688) 00:08:51.103 fused_ordering(689) 00:08:51.103 fused_ordering(690) 00:08:51.103 fused_ordering(691) 00:08:51.103 fused_ordering(692) 00:08:51.103 fused_ordering(693) 00:08:51.103 fused_ordering(694) 00:08:51.103 fused_ordering(695) 00:08:51.103 fused_ordering(696) 00:08:51.103 fused_ordering(697) 00:08:51.103 fused_ordering(698) 00:08:51.103 fused_ordering(699) 00:08:51.103 fused_ordering(700) 00:08:51.103 fused_ordering(701) 00:08:51.103 fused_ordering(702) 00:08:51.103 fused_ordering(703) 00:08:51.103 fused_ordering(704) 00:08:51.103 fused_ordering(705) 00:08:51.103 fused_ordering(706) 00:08:51.103 fused_ordering(707) 00:08:51.103 fused_ordering(708) 00:08:51.103 fused_ordering(709) 00:08:51.103 fused_ordering(710) 00:08:51.103 fused_ordering(711) 00:08:51.103 fused_ordering(712) 00:08:51.103 fused_ordering(713) 00:08:51.103 fused_ordering(714) 00:08:51.103 fused_ordering(715) 00:08:51.103 fused_ordering(716) 00:08:51.103 fused_ordering(717) 00:08:51.103 fused_ordering(718) 00:08:51.103 fused_ordering(719) 00:08:51.103 fused_ordering(720) 00:08:51.103 fused_ordering(721) 00:08:51.103 fused_ordering(722) 00:08:51.103 fused_ordering(723) 00:08:51.103 fused_ordering(724) 00:08:51.103 fused_ordering(725) 00:08:51.103 fused_ordering(726) 00:08:51.103 fused_ordering(727) 00:08:51.103 fused_ordering(728) 00:08:51.103 fused_ordering(729) 00:08:51.103 fused_ordering(730) 00:08:51.103 fused_ordering(731) 00:08:51.103 fused_ordering(732) 00:08:51.103 fused_ordering(733) 00:08:51.103 fused_ordering(734) 00:08:51.103 fused_ordering(735) 00:08:51.103 fused_ordering(736) 00:08:51.103 fused_ordering(737) 00:08:51.103 fused_ordering(738) 00:08:51.103 fused_ordering(739) 00:08:51.103 fused_ordering(740) 00:08:51.103 fused_ordering(741) 00:08:51.103 fused_ordering(742) 00:08:51.103 
fused_ordering(743) 00:08:51.103 fused_ordering(744) 00:08:51.103 fused_ordering(745) 00:08:51.103 fused_ordering(746) 00:08:51.104 fused_ordering(747) 00:08:51.104 fused_ordering(748) 00:08:51.104 fused_ordering(749) 00:08:51.104 fused_ordering(750) 00:08:51.104 fused_ordering(751) 00:08:51.104 fused_ordering(752) 00:08:51.104 fused_ordering(753) 00:08:51.104 fused_ordering(754) 00:08:51.104 fused_ordering(755) 00:08:51.104 fused_ordering(756) 00:08:51.104 fused_ordering(757) 00:08:51.104 fused_ordering(758) 00:08:51.104 fused_ordering(759) 00:08:51.104 fused_ordering(760) 00:08:51.104 fused_ordering(761) 00:08:51.104 fused_ordering(762) 00:08:51.104 fused_ordering(763) 00:08:51.104 fused_ordering(764) 00:08:51.104 fused_ordering(765) 00:08:51.104 fused_ordering(766) 00:08:51.104 fused_ordering(767) 00:08:51.104 fused_ordering(768) 00:08:51.104 fused_ordering(769) 00:08:51.104 fused_ordering(770) 00:08:51.104 fused_ordering(771) 00:08:51.104 fused_ordering(772) 00:08:51.104 fused_ordering(773) 00:08:51.104 fused_ordering(774) 00:08:51.104 fused_ordering(775) 00:08:51.104 fused_ordering(776) 00:08:51.104 fused_ordering(777) 00:08:51.104 fused_ordering(778) 00:08:51.104 fused_ordering(779) 00:08:51.104 fused_ordering(780) 00:08:51.104 fused_ordering(781) 00:08:51.104 fused_ordering(782) 00:08:51.104 fused_ordering(783) 00:08:51.104 fused_ordering(784) 00:08:51.104 fused_ordering(785) 00:08:51.104 fused_ordering(786) 00:08:51.104 fused_ordering(787) 00:08:51.104 fused_ordering(788) 00:08:51.104 fused_ordering(789) 00:08:51.104 fused_ordering(790) 00:08:51.104 fused_ordering(791) 00:08:51.104 fused_ordering(792) 00:08:51.104 fused_ordering(793) 00:08:51.104 fused_ordering(794) 00:08:51.104 fused_ordering(795) 00:08:51.104 fused_ordering(796) 00:08:51.104 fused_ordering(797) 00:08:51.104 fused_ordering(798) 00:08:51.104 fused_ordering(799) 00:08:51.104 fused_ordering(800) 00:08:51.104 fused_ordering(801) 00:08:51.104 fused_ordering(802) 00:08:51.104 fused_ordering(803) 00:08:51.104 fused_ordering(804) 00:08:51.104 fused_ordering(805) 00:08:51.104 fused_ordering(806) 00:08:51.104 fused_ordering(807) 00:08:51.104 fused_ordering(808) 00:08:51.104 fused_ordering(809) 00:08:51.104 fused_ordering(810) 00:08:51.104 fused_ordering(811) 00:08:51.104 fused_ordering(812) 00:08:51.104 fused_ordering(813) 00:08:51.104 fused_ordering(814) 00:08:51.104 fused_ordering(815) 00:08:51.104 fused_ordering(816) 00:08:51.104 fused_ordering(817) 00:08:51.104 fused_ordering(818) 00:08:51.104 fused_ordering(819) 00:08:51.104 fused_ordering(820) 00:08:51.670 fused_ordering(821) 00:08:51.670 fused_ordering(822) 00:08:51.670 fused_ordering(823) 00:08:51.670 fused_ordering(824) 00:08:51.670 fused_ordering(825) 00:08:51.670 fused_ordering(826) 00:08:51.670 fused_ordering(827) 00:08:51.670 fused_ordering(828) 00:08:51.670 fused_ordering(829) 00:08:51.670 fused_ordering(830) 00:08:51.670 fused_ordering(831) 00:08:51.670 fused_ordering(832) 00:08:51.670 fused_ordering(833) 00:08:51.670 fused_ordering(834) 00:08:51.670 fused_ordering(835) 00:08:51.670 fused_ordering(836) 00:08:51.670 fused_ordering(837) 00:08:51.670 fused_ordering(838) 00:08:51.670 fused_ordering(839) 00:08:51.670 fused_ordering(840) 00:08:51.670 fused_ordering(841) 00:08:51.670 fused_ordering(842) 00:08:51.670 fused_ordering(843) 00:08:51.670 fused_ordering(844) 00:08:51.670 fused_ordering(845) 00:08:51.670 fused_ordering(846) 00:08:51.670 fused_ordering(847) 00:08:51.670 fused_ordering(848) 00:08:51.670 fused_ordering(849) 00:08:51.670 fused_ordering(850) 
00:08:51.670 fused_ordering(851) 00:08:51.670 fused_ordering(852) 00:08:51.670 fused_ordering(853) 00:08:51.670 fused_ordering(854) 00:08:51.670 fused_ordering(855) 00:08:51.670 fused_ordering(856) 00:08:51.670 fused_ordering(857) 00:08:51.670 fused_ordering(858) 00:08:51.670 fused_ordering(859) 00:08:51.670 fused_ordering(860) 00:08:51.670 fused_ordering(861) 00:08:51.670 fused_ordering(862) 00:08:51.670 fused_ordering(863) 00:08:51.670 fused_ordering(864) 00:08:51.670 fused_ordering(865) 00:08:51.670 fused_ordering(866) 00:08:51.670 fused_ordering(867) 00:08:51.670 fused_ordering(868) 00:08:51.670 fused_ordering(869) 00:08:51.670 fused_ordering(870) 00:08:51.670 fused_ordering(871) 00:08:51.670 fused_ordering(872) 00:08:51.670 fused_ordering(873) 00:08:51.670 fused_ordering(874) 00:08:51.670 fused_ordering(875) 00:08:51.670 fused_ordering(876) 00:08:51.670 fused_ordering(877) 00:08:51.670 fused_ordering(878) 00:08:51.670 fused_ordering(879) 00:08:51.670 fused_ordering(880) 00:08:51.670 fused_ordering(881) 00:08:51.670 fused_ordering(882) 00:08:51.670 fused_ordering(883) 00:08:51.670 fused_ordering(884) 00:08:51.670 fused_ordering(885) 00:08:51.670 fused_ordering(886) 00:08:51.670 fused_ordering(887) 00:08:51.670 fused_ordering(888) 00:08:51.670 fused_ordering(889) 00:08:51.670 fused_ordering(890) 00:08:51.670 fused_ordering(891) 00:08:51.670 fused_ordering(892) 00:08:51.670 fused_ordering(893) 00:08:51.670 fused_ordering(894) 00:08:51.670 fused_ordering(895) 00:08:51.670 fused_ordering(896) 00:08:51.670 fused_ordering(897) 00:08:51.670 fused_ordering(898) 00:08:51.670 fused_ordering(899) 00:08:51.670 fused_ordering(900) 00:08:51.670 fused_ordering(901) 00:08:51.670 fused_ordering(902) 00:08:51.670 fused_ordering(903) 00:08:51.670 fused_ordering(904) 00:08:51.670 fused_ordering(905) 00:08:51.670 fused_ordering(906) 00:08:51.670 fused_ordering(907) 00:08:51.670 fused_ordering(908) 00:08:51.670 fused_ordering(909) 00:08:51.670 fused_ordering(910) 00:08:51.670 fused_ordering(911) 00:08:51.670 fused_ordering(912) 00:08:51.670 fused_ordering(913) 00:08:51.670 fused_ordering(914) 00:08:51.670 fused_ordering(915) 00:08:51.670 fused_ordering(916) 00:08:51.670 fused_ordering(917) 00:08:51.670 fused_ordering(918) 00:08:51.670 fused_ordering(919) 00:08:51.670 fused_ordering(920) 00:08:51.670 fused_ordering(921) 00:08:51.670 fused_ordering(922) 00:08:51.670 fused_ordering(923) 00:08:51.670 fused_ordering(924) 00:08:51.670 fused_ordering(925) 00:08:51.670 fused_ordering(926) 00:08:51.670 fused_ordering(927) 00:08:51.670 fused_ordering(928) 00:08:51.670 fused_ordering(929) 00:08:51.670 fused_ordering(930) 00:08:51.670 fused_ordering(931) 00:08:51.670 fused_ordering(932) 00:08:51.670 fused_ordering(933) 00:08:51.670 fused_ordering(934) 00:08:51.670 fused_ordering(935) 00:08:51.670 fused_ordering(936) 00:08:51.670 fused_ordering(937) 00:08:51.670 fused_ordering(938) 00:08:51.670 fused_ordering(939) 00:08:51.670 fused_ordering(940) 00:08:51.670 fused_ordering(941) 00:08:51.670 fused_ordering(942) 00:08:51.670 fused_ordering(943) 00:08:51.670 fused_ordering(944) 00:08:51.670 fused_ordering(945) 00:08:51.670 fused_ordering(946) 00:08:51.670 fused_ordering(947) 00:08:51.670 fused_ordering(948) 00:08:51.670 fused_ordering(949) 00:08:51.670 fused_ordering(950) 00:08:51.670 fused_ordering(951) 00:08:51.670 fused_ordering(952) 00:08:51.670 fused_ordering(953) 00:08:51.670 fused_ordering(954) 00:08:51.670 fused_ordering(955) 00:08:51.670 fused_ordering(956) 00:08:51.670 fused_ordering(957) 00:08:51.670 
fused_ordering(958) 00:08:51.670 fused_ordering(959) 00:08:51.670 fused_ordering(960) 00:08:51.670 fused_ordering(961) 00:08:51.670 fused_ordering(962) 00:08:51.670 fused_ordering(963) 00:08:51.670 fused_ordering(964) 00:08:51.670 fused_ordering(965) 00:08:51.670 fused_ordering(966) 00:08:51.670 fused_ordering(967) 00:08:51.670 fused_ordering(968) 00:08:51.670 fused_ordering(969) 00:08:51.670 fused_ordering(970) 00:08:51.670 fused_ordering(971) 00:08:51.670 fused_ordering(972) 00:08:51.670 fused_ordering(973) 00:08:51.670 fused_ordering(974) 00:08:51.670 fused_ordering(975) 00:08:51.670 fused_ordering(976) 00:08:51.670 fused_ordering(977) 00:08:51.670 fused_ordering(978) 00:08:51.670 fused_ordering(979) 00:08:51.670 fused_ordering(980) 00:08:51.670 fused_ordering(981) 00:08:51.670 fused_ordering(982) 00:08:51.670 fused_ordering(983) 00:08:51.670 fused_ordering(984) 00:08:51.670 fused_ordering(985) 00:08:51.670 fused_ordering(986) 00:08:51.670 fused_ordering(987) 00:08:51.670 fused_ordering(988) 00:08:51.670 fused_ordering(989) 00:08:51.670 fused_ordering(990) 00:08:51.670 fused_ordering(991) 00:08:51.670 fused_ordering(992) 00:08:51.670 fused_ordering(993) 00:08:51.670 fused_ordering(994) 00:08:51.670 fused_ordering(995) 00:08:51.670 fused_ordering(996) 00:08:51.670 fused_ordering(997) 00:08:51.670 fused_ordering(998) 00:08:51.670 fused_ordering(999) 00:08:51.670 fused_ordering(1000) 00:08:51.670 fused_ordering(1001) 00:08:51.670 fused_ordering(1002) 00:08:51.670 fused_ordering(1003) 00:08:51.670 fused_ordering(1004) 00:08:51.670 fused_ordering(1005) 00:08:51.670 fused_ordering(1006) 00:08:51.670 fused_ordering(1007) 00:08:51.670 fused_ordering(1008) 00:08:51.670 fused_ordering(1009) 00:08:51.670 fused_ordering(1010) 00:08:51.670 fused_ordering(1011) 00:08:51.670 fused_ordering(1012) 00:08:51.670 fused_ordering(1013) 00:08:51.670 fused_ordering(1014) 00:08:51.670 fused_ordering(1015) 00:08:51.670 fused_ordering(1016) 00:08:51.670 fused_ordering(1017) 00:08:51.670 fused_ordering(1018) 00:08:51.670 fused_ordering(1019) 00:08:51.670 fused_ordering(1020) 00:08:51.670 fused_ordering(1021) 00:08:51.670 fused_ordering(1022) 00:08:51.670 fused_ordering(1023) 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.670 rmmod nvme_tcp 00:08:51.670 rmmod nvme_fabrics 00:08:51.670 rmmod nvme_keyring 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71652 ']' 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71652 00:08:51.670 15:54:45 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71652 ']' 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71652 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:51.670 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71652 00:08:51.929 killing process with pid 71652 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71652' 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71652 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71652 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.929 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.187 15:54:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:52.187 00:08:52.187 real 0m4.292s 00:08:52.187 user 0m5.081s 00:08:52.187 sys 0m1.438s 00:08:52.187 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.187 15:54:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:52.187 ************************************ 00:08:52.187 END TEST nvmf_fused_ordering 00:08:52.187 ************************************ 00:08:52.187 15:54:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:52.187 15:54:45 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:52.187 15:54:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:52.187 15:54:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.187 15:54:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.187 ************************************ 00:08:52.187 START TEST nvmf_delete_subsystem 00:08:52.187 ************************************ 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:52.187 * Looking for test storage... 
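Before the delete_subsystem test that starts above gets going, the fused_ordering run is torn down by the nvmftestfini trap whose effects appear in the trace: sync, unload of the NVMe/TCP modules, killing pid 71652 and removing the target namespace. A simplified sketch of that teardown; kill/wait and ip netns delete stand in for the killprocess and remove_spdk_ns helpers:

  sync
  modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"             # killprocess 71652 in the trace
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # simplified stand-in for remove_spdk_ns
  ip -4 addr flush nvmf_init_if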
00:08:52.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.187 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:52.188 Cannot find device "nvmf_tgt_br" 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:52.188 Cannot find device "nvmf_tgt_br2" 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:52.188 Cannot find device "nvmf_tgt_br" 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:52.188 Cannot find device "nvmf_tgt_br2" 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:08:52.188 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:52.464 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:08:52.464 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:52.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.464 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:08:52.464 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:52.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.464 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:08:52.464 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:52.464 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:52.464 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:52.464 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:52.464 15:54:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:52.464 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:52.464 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:52.464 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:52.464 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:52.464 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:52.464 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:52.464 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:52.464 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:52.464 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:52.465 15:54:46 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:52.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:08:52.465 00:08:52.465 --- 10.0.0.2 ping statistics --- 00:08:52.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.465 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:52.465 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:52.465 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:08:52.465 00:08:52.465 --- 10.0.0.3 ping statistics --- 00:08:52.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.465 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:52.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:52.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:52.465 00:08:52.465 --- 10.0.0.1 ping statistics --- 00:08:52.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.465 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
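The nvmf_veth_init block above builds the standard topology these TCP tests use: a nvmf_tgt_ns_spdk namespace holding nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), an initiator-side nvmf_init_if (10.0.0.1), and an nvmf_br bridge joining the host-side veth peers, verified by the three pings. A condensed sketch of that setup with commands taken from the trace; the iptables rules and the cleanup of pre-existing devices are omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

The bridge keeps all three addresses on one L2 segment in the same /24, which is why the host reaches 10.0.0.2 and 10.0.0.3 without any routes.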
00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71912 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71912 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71912 ']' 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.465 15:54:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.723 [2024-07-15 15:54:46.247877] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:08:52.723 [2024-07-15 15:54:46.248014] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.723 [2024-07-15 15:54:46.390158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:52.981 [2024-07-15 15:54:46.504340] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.981 [2024-07-15 15:54:46.504640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.981 [2024-07-15 15:54:46.504747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.981 [2024-07-15 15:54:46.504838] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.981 [2024-07-15 15:54:46.504947] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
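With the namespace in place, the target itself is launched inside it and the script blocks (waitforlisten) until the JSON-RPC socket answers. A minimal sketch of that launch-and-wait step, assuming the repository path from the trace and using spdk_get_version as the liveness probe (the real waitforlisten helper in autotest_common.sh is more elaborate, with retry limits and an explicit RPC address):

  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Poll the RPC socket until the app is ready; bail out if the target died during startup.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock"

Only the network namespace is separate; the unix-domain RPC socket sits on the shared filesystem, which is why the later rpc_cmd calls in the trace need no ip netns exec.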
00:08:52.981 [2024-07-15 15:54:46.505218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.981 [2024-07-15 15:54:46.505230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.548 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.548 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:53.548 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:53.548 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:53.548 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.807 [2024-07-15 15:54:47.303541] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.807 [2024-07-15 15:54:47.320393] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.807 NULL1 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.807 Delay0 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.807 15:54:47 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71963 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:53.807 15:54:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:53.807 [2024-07-15 15:54:47.514925] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:55.709 15:54:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.709 15:54:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.709 15:54:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Write completed with error (sct=0, sc=8) 00:08:55.967 starting I/O failed: -6 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Write completed with error (sct=0, sc=8) 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 starting I/O failed: -6 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 starting I/O failed: -6 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Write completed with error (sct=0, sc=8) 00:08:55.967 Write completed with error (sct=0, sc=8) 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 starting I/O failed: -6 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 Read completed with error (sct=0, sc=8) 00:08:55.967 starting I/O failed: -6 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 
00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 [2024-07-15 15:54:49.551003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172a80 is same with the state(5) to be set 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read 
completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 starting I/O failed: -6 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 [2024-07-15 15:54:49.552298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0700000c00 is same with the state(5) to be set 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read 
completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Read completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:55.968 Write completed with error (sct=0, sc=8) 00:08:56.903 [2024-07-15 15:54:50.528285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114f510 is same with the state(5) to be set 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error 
(sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 [2024-07-15 15:54:50.555993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11714c0 is same with the state(5) to be set 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 [2024-07-15 15:54:50.556251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114f6f0 is same with the state(5) to be set 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 
00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 [2024-07-15 15:54:50.556915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f070000d600 is same with the state(5) to be set 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Read completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 15:54:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.903 15:54:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:56.903 Write completed with error (sct=0, sc=8) 00:08:56.903 [2024-07-15 15:54:50.557570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f070000cfe0 is same with the state(5) to be set 00:08:56.903 15:54:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71963 00:08:56.903 15:54:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:56.903 Initializing NVMe Controllers 00:08:56.903 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:56.903 Controller IO queue size 128, less than required. 00:08:56.903 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:56.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:56.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:56.903 Initialization complete. Launching workers. 
00:08:56.903 ======================================================== 00:08:56.903 Latency(us) 00:08:56.903 Device Information : IOPS MiB/s Average min max 00:08:56.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.26 0.08 888260.01 575.61 1014936.77 00:08:56.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.40 0.08 1010171.94 340.58 2007871.05 00:08:56.903 ======================================================== 00:08:56.903 Total : 327.66 0.16 945706.39 340.58 2007871.05 00:08:56.903 00:08:56.903 [2024-07-15 15:54:50.558445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114f510 (9): Bad file descriptor 00:08:56.903 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:57.470 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:57.470 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71963 00:08:57.470 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71963) - No such process 00:08:57.470 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71963 00:08:57.470 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71963 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71963 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.471 [2024-07-15 15:54:51.080774] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.471 15:54:51 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=72014 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72014 00:08:57.471 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:57.729 [2024-07-15 15:54:51.258262] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:57.988 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:57.988 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72014 00:08:57.988 15:54:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:58.554 15:54:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:58.554 15:54:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72014 00:08:58.554 15:54:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:59.119 15:54:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:59.119 15:54:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72014 00:08:59.119 15:54:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:59.683 15:54:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:59.683 15:54:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72014 00:08:59.683 15:54:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:59.940 15:54:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:59.940 15:54:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72014 00:08:59.940 15:54:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.505 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.505 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72014 00:09:00.505 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.763 Initializing NVMe Controllers 00:09:00.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:00.763 Controller IO queue size 128, less than required. 
00:09:00.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:00.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:00.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:00.763 Initialization complete. Launching workers. 00:09:00.763 ======================================================== 00:09:00.763 Latency(us) 00:09:00.763 Device Information : IOPS MiB/s Average min max 00:09:00.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003199.08 1000122.91 1010286.67 00:09:00.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005353.12 1000195.78 1042648.30 00:09:00.763 ======================================================== 00:09:00.763 Total : 256.00 0.12 1004276.10 1000122.91 1042648.30 00:09:00.763 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 72014 00:09:01.022 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (72014) - No such process 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 72014 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.022 rmmod nvme_tcp 00:09:01.022 rmmod nvme_fabrics 00:09:01.022 rmmod nvme_keyring 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71912 ']' 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71912 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71912 ']' 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71912 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71912 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:01.022 15:54:54 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71912' 00:09:01.022 killing process with pid 71912 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71912 00:09:01.022 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71912 00:09:01.280 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.280 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.280 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.280 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.280 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.280 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.280 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.280 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.280 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:01.280 00:09:01.280 real 0m9.269s 00:09:01.280 user 0m28.705s 00:09:01.280 sys 0m1.554s 00:09:01.280 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.280 15:54:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.280 ************************************ 00:09:01.280 END TEST nvmf_delete_subsystem 00:09:01.280 ************************************ 00:09:01.539 15:54:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:01.539 15:54:55 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:01.539 15:54:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.539 15:54:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.539 15:54:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.539 ************************************ 00:09:01.539 START TEST nvmf_ns_masking 00:09:01.539 ************************************ 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:01.539 * Looking for test storage... 
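In outline, the nvmf_delete_subsystem test that just finished stands up a deliberately slow namespace, starts I/O against it with spdk_nvme_perf, and then deletes the subsystem while that I/O is still outstanding; the long runs of "completed with error (sct=0, sc=8)" entries and the perf process exiting with "errors occurred" are the expected result, not a failure. A reduced sketch of the sequence, with paths, NQN and options as they appear in the trace (error handling and the retry/delay helpers omitted):

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512              # 1000 MB null bdev with 512-byte blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s of injected latency keeps I/O in flight
  $RPC nvmf_subsystem_add_ns "$NQN" Delay0

  # Drive I/O from the initiator side, then delete the subsystem underneath it.
  "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  $RPC nvmf_delete_subsystem "$NQN"                 # outstanding requests complete with errors (sct=0, sc=8)
  wait "$perf_pid" || true                          # perf reports "errors occurred" and exits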
00:09:01.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4c7c8017-04e3-4ae8-b6c0-840cb8bcee72 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=517bb284-b287-40cb-ac98-d14811821767 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:01.539 
15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a01e32d0-b902-4a46-b066-3e0cd8fa94ac 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:01.539 Cannot find device "nvmf_tgt_br" 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:09:01.539 15:54:55 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.539 Cannot find device "nvmf_tgt_br2" 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:01.539 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:01.540 Cannot find device "nvmf_tgt_br" 00:09:01.540 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:09:01.540 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:01.540 Cannot find device "nvmf_tgt_br2" 00:09:01.540 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:09:01.540 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:01.540 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:01.797 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.797 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:09:01.797 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.797 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:09:01.797 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:01.797 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:01.797 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:01.798 15:54:55 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:01.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:09:01.798 00:09:01.798 --- 10.0.0.2 ping statistics --- 00:09:01.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.798 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:01.798 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:01.798 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:09:01.798 00:09:01.798 --- 10.0.0.3 ping statistics --- 00:09:01.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.798 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:01.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:01.798 00:09:01.798 --- 10.0.0.1 ping statistics --- 00:09:01.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.798 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=72247 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 72247 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72247 ']' 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.798 15:54:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:02.056 [2024-07-15 15:54:55.569527] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:09:02.056 [2024-07-15 15:54:55.569611] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.056 [2024-07-15 15:54:55.707370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.315 [2024-07-15 15:54:55.800492] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.315 [2024-07-15 15:54:55.800551] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:02.315 [2024-07-15 15:54:55.800566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.315 [2024-07-15 15:54:55.800576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.315 [2024-07-15 15:54:55.800586] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.315 [2024-07-15 15:54:55.800622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.881 15:54:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.881 15:54:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:02.881 15:54:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.881 15:54:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.881 15:54:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:03.145 15:54:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.146 15:54:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:03.403 [2024-07-15 15:54:56.888809] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.403 15:54:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:03.403 15:54:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:03.403 15:54:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:03.661 Malloc1 00:09:03.661 15:54:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:03.918 Malloc2 00:09:03.918 15:54:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.176 15:54:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:04.434 15:54:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.692 [2024-07-15 15:54:58.233923] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.692 15:54:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:04.692 15:54:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a01e32d0-b902-4a46-b066-3e0cd8fa94ac -a 10.0.0.2 -s 4420 -i 4 00:09:04.692 15:54:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.692 15:54:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:04.692 15:54:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.692 15:54:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:04.692 15:54:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
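For reference, the namespace-masking setup traced above reduces to the following sequence (a condensed sketch: every command is taken from the trace, the full repo path is shortened to rpc.py, the host-identifier flag is left out, and the target is assumed to already be running inside nvmf_tgt_ns_spdk):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                                      # TCP transport with the options as traced
  rpc.py bdev_malloc_create 64 512 -b Malloc1                                         # 64 MiB RAM-backed bdev, 512-byte blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME  # -a: allow any host until masking starts
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420 -i 4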
00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:07.228 [ 0]:0x1 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e4bd042c3f434578b64df76563e565af 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e4bd042c3f434578b64df76563e565af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:07.228 [ 0]:0x1 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e4bd042c3f434578b64df76563e565af 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e4bd042c3f434578b64df76563e565af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:07.228 [ 1]:0x2 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=1b7fce2a77424eec93c284cd42ddd0b9 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1b7fce2a77424eec93c284cd42ddd0b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.228 15:55:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.487 15:55:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:08.052 15:55:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:08.052 15:55:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a01e32d0-b902-4a46-b066-3e0cd8fa94ac -a 10.0.0.2 -s 4420 -i 4 00:09:08.052 15:55:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:08.052 15:55:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:08.052 15:55:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.052 15:55:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:08.052 15:55:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:08.052 15:55:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:09.962 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:10.221 [ 0]:0x2 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1b7fce2a77424eec93c284cd42ddd0b9 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1b7fce2a77424eec93c284cd42ddd0b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:10.221 15:55:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:10.480 [ 0]:0x1 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e4bd042c3f434578b64df76563e565af 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e4bd042c3f434578b64df76563e565af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:10.480 [ 1]:0x2 00:09:10.480 15:55:04 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1b7fce2a77424eec93c284cd42ddd0b9 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1b7fce2a77424eec93c284cd42ddd0b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:10.480 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:10.739 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:10.998 [ 0]:0x2 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1b7fce2a77424eec93c284cd42ddd0b9 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1b7fce2a77424eec93c284cd42ddd0b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
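The visibility checks above all run through the same small helper; a rough reconstruction of what the target/ns_masking.sh@43-45 trace lines correspond to (paraphrased from the trace, not copied from the script) is:

  ns_is_visible() {
      nvme list-ns /dev/nvme0 | grep "$1"                                 # the NSID must show up in the active namespace list
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != 00000000000000000000000000000000 ]]                    # a masked namespace identifies with an all-zero NGUID
  }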
00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.998 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:11.256 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:11.256 15:55:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a01e32d0-b902-4a46-b066-3e0cd8fa94ac -a 10.0.0.2 -s 4420 -i 4 00:09:11.514 15:55:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:11.514 15:55:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:11.515 15:55:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.515 15:55:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:11.515 15:55:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:11.515 15:55:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:13.415 [ 0]:0x1 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:13.415 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:13.673 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e4bd042c3f434578b64df76563e565af 00:09:13.673 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e4bd042c3f434578b64df76563e565af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:13.673 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:13.673 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:13.673 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:09:13.673 [ 1]:0x2 00:09:13.673 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:13.673 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:13.673 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1b7fce2a77424eec93c284cd42ddd0b9 00:09:13.673 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1b7fce2a77424eec93c284cd42ddd0b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:13.673 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:13.932 [ 0]:0x2 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1b7fce2a77424eec93c284cd42ddd0b9 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1b7fce2a77424eec93c284cd42ddd0b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:13.932 15:55:07 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:13.932 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:14.190 [2024-07-15 15:55:07.900313] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:14.190 2024/07/15 15:55:07 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:09:14.190 request: 00:09:14.190 { 00:09:14.190 "method": "nvmf_ns_remove_host", 00:09:14.190 "params": { 00:09:14.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.190 "nsid": 2, 00:09:14.190 "host": "nqn.2016-06.io.spdk:host1" 00:09:14.190 } 00:09:14.190 } 00:09:14.190 Got JSON-RPC error response 00:09:14.190 GoRPCClient: error on JSON-RPC call 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:14.448 15:55:07 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:14.448 [ 0]:0x2 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:14.448 15:55:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1b7fce2a77424eec93c284cd42ddd0b9 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1b7fce2a77424eec93c284cd42ddd0b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72629 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72629 /var/tmp/host.sock 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72629 ']' 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
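The NOT wrapper used throughout these negative checks (common/autotest_common.sh@648-675 in the trace) inverts the exit status so that an expected failure, such as the rejected nvmf_ns_remove_host call for nsid 2 above, counts as a pass. Stripped of its tracing and special cases (the real helper also inspects signal exits via the es > 128 branch seen in the trace), it behaves roughly like:

  NOT() {
      local es=0
      "$@" || es=$?          # run the wrapped command and capture a non-zero exit status
      (( es != 0 ))          # succeed only when the wrapped command failed
  }
  NOT ns_is_visible 0x1      # passes because namespace 1 is masked for this host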
00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.448 15:55:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:14.448 [2024-07-15 15:55:08.152081] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:09:14.448 [2024-07-15 15:55:08.152197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72629 ] 00:09:14.707 [2024-07-15 15:55:08.292373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.965 [2024-07-15 15:55:08.452129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.532 15:55:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.532 15:55:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:15.532 15:55:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.790 15:55:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.048 15:55:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4c7c8017-04e3-4ae8-b6c0-840cb8bcee72 00:09:16.048 15:55:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:16.048 15:55:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4C7C801704E34AE8B6C0840CB8BCEE72 -i 00:09:16.306 15:55:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 517bb284-b287-40cb-ac98-d14811821767 00:09:16.306 15:55:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:16.306 15:55:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 517BB284B28740CBAC98D14811821767 -i 00:09:16.564 15:55:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:17.131 15:55:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:17.131 15:55:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:17.131 15:55:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:17.696 nvme0n1 00:09:17.696 15:55:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:17.696 15:55:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:17.954 nvme1n2 00:09:17.954 15:55:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:17.954 15:55:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:17.954 15:55:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:17.954 15:55:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:17.954 15:55:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:18.213 15:55:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:18.213 15:55:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:18.213 15:55:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:18.213 15:55:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:18.471 15:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4c7c8017-04e3-4ae8-b6c0-840cb8bcee72 == \4\c\7\c\8\0\1\7\-\0\4\e\3\-\4\a\e\8\-\b\6\c\0\-\8\4\0\c\b\8\b\c\e\e\7\2 ]] 00:09:18.471 15:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:18.471 15:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:18.471 15:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:18.730 15:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 517bb284-b287-40cb-ac98-d14811821767 == \5\1\7\b\b\2\8\4\-\b\2\8\7\-\4\0\c\b\-\a\c\9\8\-\d\1\4\8\1\1\8\2\1\7\6\7 ]] 00:09:18.730 15:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72629 00:09:18.730 15:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72629 ']' 00:09:18.730 15:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72629 00:09:18.730 15:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:18.730 15:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.730 15:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72629 00:09:18.988 killing process with pid 72629 00:09:18.988 15:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:18.988 15:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:18.988 15:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72629' 00:09:18.988 15:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72629 00:09:18.988 15:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72629 00:09:19.247 15:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.506 15:55:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:19.506 15:55:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:19.506 15:55:13 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.506 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:19.506 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.506 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:19.506 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.506 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.506 rmmod nvme_tcp 00:09:19.506 rmmod nvme_fabrics 00:09:19.506 rmmod nvme_keyring 00:09:19.763 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.763 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:19.763 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:19.763 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 72247 ']' 00:09:19.763 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 72247 00:09:19.763 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72247 ']' 00:09:19.763 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72247 00:09:19.764 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:19.764 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.764 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72247 00:09:19.764 killing process with pid 72247 00:09:19.764 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.764 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.764 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72247' 00:09:19.764 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72247 00:09:19.764 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72247 00:09:20.022 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.022 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.022 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.022 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.022 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.022 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.022 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.022 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.022 15:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:20.022 ************************************ 00:09:20.022 END TEST nvmf_ns_masking 00:09:20.022 ************************************ 00:09:20.022 00:09:20.022 real 0m18.567s 00:09:20.022 user 0m29.812s 00:09:20.022 sys 0m2.845s 00:09:20.022 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.022 15:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:20.022 15:55:13 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:20.022 15:55:13 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:09:20.022 15:55:13 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:09:20.022 15:55:13 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:20.022 15:55:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.022 15:55:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.022 15:55:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.022 ************************************ 00:09:20.022 START TEST nvmf_host_management 00:09:20.022 ************************************ 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:20.022 * Looking for test storage... 00:09:20.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.022 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.023 15:55:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:20.281 Cannot find device "nvmf_tgt_br" 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:20.281 Cannot find device "nvmf_tgt_br2" 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:09:20.281 15:55:13 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:20.281 Cannot find device "nvmf_tgt_br" 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:20.281 Cannot find device "nvmf_tgt_br2" 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:20.281 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:20.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:20.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:20.282 15:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:20.282 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:20.540 
15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:20.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:09:20.540 00:09:20.540 --- 10.0.0.2 ping statistics --- 00:09:20.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.540 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:20.540 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:20.540 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:09:20.540 00:09:20.540 --- 10.0.0.3 ping statistics --- 00:09:20.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.540 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:20.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:20.540 00:09:20.540 --- 10.0.0.1 ping statistics --- 00:09:20.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.540 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72996 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72996 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72996 ']' 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.540 15:55:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:20.540 [2024-07-15 15:55:14.180043] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:09:20.540 [2024-07-15 15:55:14.180156] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.799 [2024-07-15 15:55:14.314860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.799 [2024-07-15 15:55:14.462075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.799 [2024-07-15 15:55:14.462155] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.799 [2024-07-15 15:55:14.462172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.799 [2024-07-15 15:55:14.462184] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.799 [2024-07-15 15:55:14.462194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
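The host_management run above first builds its test network out of veth pairs, a network namespace and a bridge, then launches the target inside that namespace. A condensed sketch of what nvmf/common.sh did in this run, using the interface names and addresses visible in the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and the bridge FORWARD rule are elided for brevity; this is a sketch of the trace, not the script verbatim):

    # Topology from the trace: initiator at 10.0.0.1, target at 10.0.0.2 inside nvmf_tgt_ns_spdk.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2      # initiator -> target reachability, as checked in the trace above
    # The target then runs inside the namespace with the core mask used in this run (0x1E):
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &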
00:09:20.799 [2024-07-15 15:55:14.462304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.799 [2024-07-15 15:55:14.463249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.799 [2024-07-15 15:55:14.463360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:20.799 [2024-07-15 15:55:14.463369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.733 [2024-07-15 15:55:15.287691] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.733 Malloc0 00:09:21.733 [2024-07-15 15:55:15.362764] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
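The subsystem configuration itself is applied as a batch (host_management.sh@23–30 pipes rpcs.txt through rpc_cmd), so the individual RPC lines are not echoed in the trace. Based on the names that do appear here — the tcp transport created with "-o -u 8192", the Malloc0 bdev, the listener on 10.0.0.2:4420, and the cnode0/host0 pair used by the remove_host/add_host calls further down — the batch plausibly boils down to the sketch below. Treat it as an illustration, not the script's literal rpcs.txt; the bdev size and serial number are assumptions:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # rpc_cmd in the trace wraps this script
    $rpc nvmf_create_transport -t tcp -o -u 8192        # matches host_management.sh@18 above
    $rpc bdev_malloc_create 64 512 -b Malloc0            # size/block size assumed (64 MiB, 512 B)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME   # serial assumed
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0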
00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=73068 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 73068 /var/tmp/bdevperf.sock 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 73068 ']' 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:21.733 { 00:09:21.733 "params": { 00:09:21.733 "name": "Nvme$subsystem", 00:09:21.733 "trtype": "$TEST_TRANSPORT", 00:09:21.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.733 "adrfam": "ipv4", 00:09:21.733 "trsvcid": "$NVMF_PORT", 00:09:21.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.733 "hdgst": ${hdgst:-false}, 00:09:21.733 "ddgst": ${ddgst:-false} 00:09:21.733 }, 00:09:21.733 "method": "bdev_nvme_attach_controller" 00:09:21.733 } 00:09:21.733 EOF 00:09:21.733 )") 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:21.733 15:55:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:21.733 "params": { 00:09:21.733 "name": "Nvme0", 00:09:21.733 "trtype": "tcp", 00:09:21.733 "traddr": "10.0.0.2", 00:09:21.733 "adrfam": "ipv4", 00:09:21.733 "trsvcid": "4420", 00:09:21.733 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:21.733 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:21.733 "hdgst": false, 00:09:21.733 "ddgst": false 00:09:21.733 }, 00:09:21.733 "method": "bdev_nvme_attach_controller" 00:09:21.733 }' 00:09:21.991 [2024-07-15 15:55:15.471273] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
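host_management.sh@72–74 starts the initiator-side workload: the bdevperf instance whose startup banner begins just above attaches to the subsystem using the controller JSON generated by gen_nvmf_target_json and exposes its own RPC socket at /var/tmp/bdevperf.sock. A sketch of that wiring, assuming the /dev/fd/63 argument in the trace comes from bash process substitution:

    # -q 64 / -o 65536 / -w verify / -t 10 reappear in the result table below as
    # "depth: 64, IO size: 65536", workload "verify".
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!                                    # 73068 in this run
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock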
00:09:21.991 [2024-07-15 15:55:15.471369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73068 ] 00:09:21.991 [2024-07-15 15:55:15.615696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.249 [2024-07-15 15:55:15.745531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.249 Running I/O for 10 seconds... 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:22.816 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.075 
15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:23.075 [2024-07-15 15:55:16.587815] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0e310 is same with the state(5) to be set 00:09:23.075 [2024-07-15 15:55:16.587873] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0e310 is same with the state(5) to be set 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.075 15:55:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:23.075 [2024-07-15 15:55:16.600699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:23.075 [2024-07-15 15:55:16.600740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.600754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:23.075 [2024-07-15 15:55:16.600764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.600774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:23.075 [2024-07-15 15:55:16.600784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.600794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:23.075 [2024-07-15 15:55:16.600803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.600813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dfaf0 is same with the state(5) to be set 00:09:23.075 [2024-07-15 15:55:16.610039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 
15:55:16.610129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.075 [2024-07-15 15:55:16.610394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.075 [2024-07-15 15:55:16.610414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.610988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.610999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.076 [2024-07-15 15:55:16.611305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.076 [2024-07-15 15:55:16.611316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.077 [2024-07-15 15:55:16.611326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.077 [2024-07-15 15:55:16.611337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.077 [2024-07-15 15:55:16.611346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.077 [2024-07-15 15:55:16.611357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.077 [2024-07-15 15:55:16.611366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.077 [2024-07-15 15:55:16.611377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.077 [2024-07-15 15:55:16.611386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.077 [2024-07-15 15:55:16.611397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.077 [2024-07-15 15:55:16.611406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.077 [2024-07-15 15:55:16.611416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.077 [2024-07-15 15:55:16.611430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.077 [2024-07-15 15:55:16.611442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.077 [2024-07-15 15:55:16.611451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.077 [2024-07-15 15:55:16.611462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.077 [2024-07-15 15:55:16.611471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.077 [2024-07-15 15:55:16.611482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.077 [2024-07-15 15:55:16.611491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.077 [2024-07-15 15:55:16.611566] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10df820 was disconnected and freed. reset controller. 00:09:23.077 [2024-07-15 15:55:16.611595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dfaf0 (9): Bad file descriptor 00:09:23.077 [2024-07-15 15:55:16.612718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:23.077 task offset: 0 on job bdev=Nvme0n1 fails 00:09:23.077 00:09:23.077 Latency(us) 00:09:23.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.077 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:23.077 Job: Nvme0n1 ended in about 0.69 seconds with error 00:09:23.077 Verification LBA range: start 0x0 length 0x400 00:09:23.077 Nvme0n1 : 0.69 1494.21 93.39 93.39 0.00 39297.79 1906.50 37653.41 00:09:23.077 =================================================================================================================== 00:09:23.077 Total : 1494.21 93.39 93.39 0.00 39297.79 1906.50 37653.41 00:09:23.077 [2024-07-15 15:55:16.614623] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.077 [2024-07-15 15:55:16.625407] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
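The abort storm and controller reset above are the point of the test: while bdevperf has 64 writes in flight, host_management.sh@84 revokes the host's access to the subsystem and @85 restores it, so every queued command completes as ABORTED - SQ DELETION and the initiator has to reconnect. The two RPCs involved, as issued in the trace:

    # Revoke nqn.2016-06.io.spdk:host0's access mid-I/O ...
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # ... then grant it back so the "resetting controller" path above can succeed.
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0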
00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 73068 00:09:24.011 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (73068) - No such process 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.011 { 00:09:24.011 "params": { 00:09:24.011 "name": "Nvme$subsystem", 00:09:24.011 "trtype": "$TEST_TRANSPORT", 00:09:24.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.011 "adrfam": "ipv4", 00:09:24.011 "trsvcid": "$NVMF_PORT", 00:09:24.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.011 "hdgst": ${hdgst:-false}, 00:09:24.011 "ddgst": ${ddgst:-false} 00:09:24.011 }, 00:09:24.011 "method": "bdev_nvme_attach_controller" 00:09:24.011 } 00:09:24.011 EOF 00:09:24.011 )") 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:24.011 15:55:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.011 "params": { 00:09:24.011 "name": "Nvme0", 00:09:24.011 "trtype": "tcp", 00:09:24.011 "traddr": "10.0.0.2", 00:09:24.011 "adrfam": "ipv4", 00:09:24.011 "trsvcid": "4420", 00:09:24.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:24.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:24.011 "hdgst": false, 00:09:24.011 "ddgst": false 00:09:24.011 }, 00:09:24.011 "method": "bdev_nvme_attach_controller" 00:09:24.011 }' 00:09:24.011 [2024-07-15 15:55:17.661999] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:09:24.011 [2024-07-15 15:55:17.662123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73118 ] 00:09:24.268 [2024-07-15 15:55:17.803754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.268 [2024-07-15 15:55:17.893703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.525 Running I/O for 1 seconds... 
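In both bdevperf tables the MiB/s column is simply IOPS scaled by the 64 KiB I/O size, which gives a quick sanity check on the numbers. Against the failed 10-second run above:

    # MiB/s = IOPS * io_size / 2^20; 1494.21 IOPS at 65536-byte I/Os ~= 93.39 MiB/s,
    # matching the Nvme0n1 row in the table above (the 1-second run below works out the same way).
    awk 'BEGIN { printf "%.2f\n", 1494.21 * 65536 / 1048576 }'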
00:09:25.454 00:09:25.454 Latency(us) 00:09:25.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.454 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:25.454 Verification LBA range: start 0x0 length 0x400 00:09:25.454 Nvme0n1 : 1.02 1561.76 97.61 0.00 0.00 40159.06 5093.93 37891.72 00:09:25.454 =================================================================================================================== 00:09:25.454 Total : 1561.76 97.61 0.00 0.00 40159.06 5093.93 37891.72 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:25.711 rmmod nvme_tcp 00:09:25.711 rmmod nvme_fabrics 00:09:25.711 rmmod nvme_keyring 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72996 ']' 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72996 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72996 ']' 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72996 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.711 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72996 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72996' 00:09:26.001 killing process with pid 72996 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72996 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72996 00:09:26.001 [2024-07-15 15:55:19.680900] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.001 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.259 15:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:26.259 15:55:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:26.259 00:09:26.259 real 0m6.104s 00:09:26.259 user 0m23.902s 00:09:26.259 sys 0m1.406s 00:09:26.259 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.259 15:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:26.259 ************************************ 00:09:26.259 END TEST nvmf_host_management 00:09:26.259 ************************************ 00:09:26.259 15:55:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:26.259 15:55:19 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:26.259 15:55:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:26.259 15:55:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.259 15:55:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:26.259 ************************************ 00:09:26.259 START TEST nvmf_lvol 00:09:26.259 ************************************ 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:26.259 * Looking for test storage... 
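Before nvmf_lvol starts, the nvmftestfini sequence above tears the host_management fixture back down: the target process is killed, the kernel initiator modules are unloaded, and the namespace and addresses are removed. A condensed sketch of that teardown, assuming _remove_spdk_ns ultimately deletes the nvmf_tgt_ns_spdk namespace:

    kill 72996                          # killprocess: stop the nvmf_tgt started for this test (the helper then waits for it)
    modprobe -v -r nvme-tcp             # removes nvme_tcp, nvme_fabrics, nvme_keyring, as logged above
    ip netns delete nvmf_tgt_ns_spdk    # assumed equivalent of _remove_spdk_ns for this run
    ip -4 addr flush nvmf_init_if       # nvmf/common.sh@279 above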
00:09:26.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:26.259 15:55:19 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:26.259 Cannot find device "nvmf_tgt_br" 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.259 Cannot find device "nvmf_tgt_br2" 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:26.259 Cannot find device "nvmf_tgt_br" 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:26.259 Cannot find device "nvmf_tgt_br2" 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:26.259 15:55:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:26.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:09:26.517 00:09:26.517 --- 10.0.0.2 ping statistics --- 00:09:26.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.517 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:26.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:26.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:09:26.517 00:09:26.517 --- 10.0.0.3 ping statistics --- 00:09:26.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.517 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:26.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:26.517 00:09:26.517 --- 10.0.0.1 ping statistics --- 00:09:26.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.517 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:26.517 15:55:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:26.775 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=73332 00:09:26.775 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:26.775 15:55:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 73332 00:09:26.775 15:55:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 73332 ']' 00:09:26.775 15:55:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.775 15:55:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.775 15:55:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.775 15:55:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.775 15:55:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:26.775 [2024-07-15 15:55:20.311248] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:09:26.775 [2024-07-15 15:55:20.311798] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.775 [2024-07-15 15:55:20.454432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.033 [2024-07-15 15:55:20.589407] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.033 [2024-07-15 15:55:20.589463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:27.033 [2024-07-15 15:55:20.589477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.033 [2024-07-15 15:55:20.589492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.033 [2024-07-15 15:55:20.589506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.033 [2024-07-15 15:55:20.589683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.033 [2024-07-15 15:55:20.590461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.033 [2024-07-15 15:55:20.590532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.967 15:55:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.967 15:55:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:09:27.967 15:55:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:27.967 15:55:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:27.967 15:55:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:27.967 15:55:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.967 15:55:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:27.967 [2024-07-15 15:55:21.678128] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.225 15:55:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:28.483 15:55:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:28.483 15:55:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:28.742 15:55:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:28.742 15:55:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:29.000 15:55:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:29.258 15:55:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c9113098-6fac-4b1b-948b-bafff6d65b64 00:09:29.258 15:55:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c9113098-6fac-4b1b-948b-bafff6d65b64 lvol 20 00:09:29.517 15:55:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a8532b66-a62c-466d-9f64-f0d30e78b0c4 00:09:29.517 15:55:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:29.775 15:55:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a8532b66-a62c-466d-9f64-f0d30e78b0c4 00:09:30.034 15:55:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:30.293 [2024-07-15 15:55:23.947923] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.293 15:55:23 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.558 15:55:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:30.558 15:55:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73485 00:09:30.558 15:55:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:31.942 15:55:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot a8532b66-a62c-466d-9f64-f0d30e78b0c4 MY_SNAPSHOT 00:09:31.942 15:55:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ec769506-9942-4798-a230-f195ddca9190 00:09:31.942 15:55:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize a8532b66-a62c-466d-9f64-f0d30e78b0c4 30 00:09:32.204 15:55:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone ec769506-9942-4798-a230-f195ddca9190 MY_CLONE 00:09:32.773 15:55:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=90470637-35e3-426a-8f21-4ae6baa804b2 00:09:32.773 15:55:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 90470637-35e3-426a-8f21-4ae6baa804b2 00:09:33.342 15:55:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73485 00:09:41.480 Initializing NVMe Controllers 00:09:41.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:41.480 Controller IO queue size 128, less than required. 00:09:41.480 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:41.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:41.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:41.480 Initialization complete. Launching workers. 
00:09:41.480 ======================================================== 00:09:41.480 Latency(us) 00:09:41.480 Device Information : IOPS MiB/s Average min max 00:09:41.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10362.00 40.48 12358.77 1547.65 62522.25 00:09:41.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10442.20 40.79 12258.03 3485.82 79488.72 00:09:41.480 ======================================================== 00:09:41.480 Total : 20804.20 81.27 12308.20 1547.65 79488.72 00:09:41.480 00:09:41.480 15:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:41.480 15:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a8532b66-a62c-466d-9f64-f0d30e78b0c4 00:09:41.480 15:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c9113098-6fac-4b1b-948b-bafff6d65b64 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:41.739 rmmod nvme_tcp 00:09:41.739 rmmod nvme_fabrics 00:09:41.739 rmmod nvme_keyring 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 73332 ']' 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 73332 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 73332 ']' 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 73332 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:41.739 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73332 00:09:41.998 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:41.998 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:41.998 killing process with pid 73332 00:09:41.998 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73332' 00:09:41.998 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 73332 00:09:41.998 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 73332 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
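For reference, the nvmf_lvol teardown recorded above follows a fixed order: drop the NVMe-oF subsystem, delete the lvol, delete the lvol store, stop the target, then unload the initiator modules. The lines below are a condensed sketch of that sequence using the rpc.py path, NQN, UUIDs, and PID from this run; they illustrate what the log shows and are not the nvmftestfini source.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0                 # remove the exported subsystem first
$rpc bdev_lvol_delete a8532b66-a62c-466d-9f64-f0d30e78b0c4            # delete the lvol before its store
$rpc bdev_lvol_delete_lvstore -u c9113098-6fac-4b1b-948b-bafff6d65b64
kill 73332                                                            # nvmf_tgt pid from this run
wait 73332                                                            # reaped by the test shell that launched it
modprobe -v -r nvme-tcp                                               # initiator-side module unload, as logged
modprobe -v -r nvme-fabrics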
00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:42.256 00:09:42.256 real 0m16.011s 00:09:42.256 user 1m6.655s 00:09:42.256 sys 0m4.077s 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:42.256 ************************************ 00:09:42.256 END TEST nvmf_lvol 00:09:42.256 ************************************ 00:09:42.256 15:55:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:42.256 15:55:35 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:42.256 15:55:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:42.256 15:55:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.256 15:55:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:42.256 ************************************ 00:09:42.256 START TEST nvmf_lvs_grow 00:09:42.256 ************************************ 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:42.256 * Looking for test storage... 
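The nvmf_lvs_grow run that starts here repeats the same veth/namespace bring-up (nvmf_veth_init) performed by the previous test. The following is a minimal sketch of that sequence using the interface names and addresses visible in this log; the second target interface (nvmf_tgt_if2, 10.0.0.3) is configured the same way and is omitted for brevity.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br              # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                       # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                              # bridge the two host-side ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                   # reachability check, as in the log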
00:09:42.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.256 15:55:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:42.257 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:42.515 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:42.515 Cannot find device "nvmf_tgt_br" 00:09:42.515 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:09:42.515 15:55:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:42.515 Cannot find device "nvmf_tgt_br2" 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:42.515 Cannot find device "nvmf_tgt_br" 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:42.515 Cannot find device "nvmf_tgt_br2" 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.515 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:42.515 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:42.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:42.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:09:42.774 00:09:42.774 --- 10.0.0.2 ping statistics --- 00:09:42.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.774 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:42.774 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:42.774 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:09:42.774 00:09:42.774 --- 10.0.0.3 ping statistics --- 00:09:42.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.774 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:42.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:09:42.774 00:09:42.774 --- 10.0.0.1 ping statistics --- 00:09:42.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.774 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73842 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73842 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73842 ']' 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:42.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
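The lvs_grow_clean test that follows exercises bdev_lvol_grow_lvstore against an lvol store built on an AIO bdev backed by a plain file. Below is a minimal sketch of that RPC sequence using the sizes from this run (200M backing file grown to 400M, 4 MiB clusters, 150M lvol); lvs_uuid is a placeholder name for the UUID returned by bdev_lvol_create_lvstore.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio"                                              # 200M backing file
$rpc bdev_aio_create "$aio" aio_bdev 4096                            # AIO bdev with 4096-byte blocks
lvs_uuid=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_create -u "$lvs_uuid" lvol 150                        # 150M lvol on the store
$rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'   # 49 clusters before the grow
truncate -s 400M "$aio"                                              # enlarge the backing file
$rpc bdev_aio_rescan aio_bdev                                        # AIO bdev picks up the new size
$rpc bdev_lvol_grow_lvstore -u "$lvs_uuid"
$rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'   # 99 clusters afterwards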
00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:42.774 15:55:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:42.774 [2024-07-15 15:55:36.361239] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:09:42.774 [2024-07-15 15:55:36.361564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.033 [2024-07-15 15:55:36.502811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.033 [2024-07-15 15:55:36.593444] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.033 [2024-07-15 15:55:36.593494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.033 [2024-07-15 15:55:36.593503] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.033 [2024-07-15 15:55:36.593510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.033 [2024-07-15 15:55:36.593517] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.033 [2024-07-15 15:55:36.593545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.967 15:55:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:43.967 15:55:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:09:43.967 15:55:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.967 15:55:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:43.967 15:55:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:43.967 15:55:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.967 15:55:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:44.224 [2024-07-15 15:55:37.700035] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:44.224 ************************************ 00:09:44.224 START TEST lvs_grow_clean 00:09:44.224 ************************************ 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:44.224 15:55:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:44.224 15:55:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.481 15:55:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:44.481 15:55:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:44.739 15:55:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cbdd2d2a-c5ef-4827-a251-00d102bba548 00:09:44.739 15:55:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbdd2d2a-c5ef-4827-a251-00d102bba548 00:09:44.739 15:55:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:44.997 15:55:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:44.997 15:55:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:44.997 15:55:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cbdd2d2a-c5ef-4827-a251-00d102bba548 lvol 150 00:09:45.256 15:55:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dac2fa93-ae90-4d31-ba96-bdb442d64828 00:09:45.256 15:55:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:45.256 15:55:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:45.515 [2024-07-15 15:55:39.213893] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:45.515 [2024-07-15 15:55:39.214007] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:45.515 true 00:09:45.772 15:55:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbdd2d2a-c5ef-4827-a251-00d102bba548 00:09:45.772 15:55:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:46.030 15:55:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:46.030 15:55:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:46.030 15:55:39 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dac2fa93-ae90-4d31-ba96-bdb442d64828 00:09:46.291 15:55:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:46.548 [2024-07-15 15:55:40.218528] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.548 15:55:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:46.807 15:55:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:46.807 15:55:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74009 00:09:46.807 15:55:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.807 15:55:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74009 /var/tmp/bdevperf.sock 00:09:46.807 15:55:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 74009 ']' 00:09:46.807 15:55:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.807 15:55:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.807 15:55:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.807 15:55:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.807 15:55:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:46.807 [2024-07-15 15:55:40.527996] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:09:46.807 [2024-07-15 15:55:40.528102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74009 ] 00:09:47.065 [2024-07-15 15:55:40.665953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.323 [2024-07-15 15:55:40.796850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.943 15:55:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.943 15:55:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:09:47.943 15:55:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:48.200 Nvme0n1 00:09:48.200 15:55:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:48.458 [ 00:09:48.458 { 00:09:48.458 "aliases": [ 00:09:48.458 "dac2fa93-ae90-4d31-ba96-bdb442d64828" 00:09:48.458 ], 00:09:48.458 "assigned_rate_limits": { 00:09:48.458 "r_mbytes_per_sec": 0, 00:09:48.458 "rw_ios_per_sec": 0, 00:09:48.458 "rw_mbytes_per_sec": 0, 00:09:48.458 "w_mbytes_per_sec": 0 00:09:48.458 }, 00:09:48.458 "block_size": 4096, 00:09:48.458 "claimed": false, 00:09:48.458 "driver_specific": { 00:09:48.458 "mp_policy": "active_passive", 00:09:48.458 "nvme": [ 00:09:48.458 { 00:09:48.458 "ctrlr_data": { 00:09:48.458 "ana_reporting": false, 00:09:48.458 "cntlid": 1, 00:09:48.458 "firmware_revision": "24.09", 00:09:48.458 "model_number": "SPDK bdev Controller", 00:09:48.458 "multi_ctrlr": true, 00:09:48.458 "oacs": { 00:09:48.458 "firmware": 0, 00:09:48.458 "format": 0, 00:09:48.458 "ns_manage": 0, 00:09:48.458 "security": 0 00:09:48.458 }, 00:09:48.458 "serial_number": "SPDK0", 00:09:48.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:48.458 "vendor_id": "0x8086" 00:09:48.458 }, 00:09:48.458 "ns_data": { 00:09:48.458 "can_share": true, 00:09:48.458 "id": 1 00:09:48.458 }, 00:09:48.458 "trid": { 00:09:48.458 "adrfam": "IPv4", 00:09:48.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:48.458 "traddr": "10.0.0.2", 00:09:48.458 "trsvcid": "4420", 00:09:48.458 "trtype": "TCP" 00:09:48.458 }, 00:09:48.458 "vs": { 00:09:48.458 "nvme_version": "1.3" 00:09:48.458 } 00:09:48.458 } 00:09:48.458 ] 00:09:48.458 }, 00:09:48.458 "memory_domains": [ 00:09:48.458 { 00:09:48.458 "dma_device_id": "system", 00:09:48.458 "dma_device_type": 1 00:09:48.458 } 00:09:48.458 ], 00:09:48.458 "name": "Nvme0n1", 00:09:48.458 "num_blocks": 38912, 00:09:48.458 "product_name": "NVMe disk", 00:09:48.458 "supported_io_types": { 00:09:48.458 "abort": true, 00:09:48.458 "compare": true, 00:09:48.458 "compare_and_write": true, 00:09:48.458 "copy": true, 00:09:48.458 "flush": true, 00:09:48.458 "get_zone_info": false, 00:09:48.458 "nvme_admin": true, 00:09:48.458 "nvme_io": true, 00:09:48.458 "nvme_io_md": false, 00:09:48.458 "nvme_iov_md": false, 00:09:48.458 "read": true, 00:09:48.458 "reset": true, 00:09:48.458 "seek_data": false, 00:09:48.458 "seek_hole": false, 00:09:48.458 "unmap": true, 00:09:48.458 "write": true, 00:09:48.458 "write_zeroes": true, 00:09:48.458 "zcopy": false, 00:09:48.458 
"zone_append": false, 00:09:48.458 "zone_management": false 00:09:48.458 }, 00:09:48.458 "uuid": "dac2fa93-ae90-4d31-ba96-bdb442d64828", 00:09:48.458 "zoned": false 00:09:48.458 } 00:09:48.458 ] 00:09:48.458 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:48.458 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74057 00:09:48.458 15:55:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:48.715 Running I/O for 10 seconds... 00:09:49.646 Latency(us) 00:09:49.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.646 Nvme0n1 : 1.00 8448.00 33.00 0.00 0.00 0.00 0.00 0.00 00:09:49.646 =================================================================================================================== 00:09:49.646 Total : 8448.00 33.00 0.00 0.00 0.00 0.00 0.00 00:09:49.646 00:09:50.580 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cbdd2d2a-c5ef-4827-a251-00d102bba548 00:09:50.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.580 Nvme0n1 : 2.00 8473.50 33.10 0.00 0.00 0.00 0.00 0.00 00:09:50.580 =================================================================================================================== 00:09:50.580 Total : 8473.50 33.10 0.00 0.00 0.00 0.00 0.00 00:09:50.580 00:09:50.838 true 00:09:50.838 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbdd2d2a-c5ef-4827-a251-00d102bba548 00:09:50.838 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:51.405 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:51.405 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:51.405 15:55:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 74057 00:09:51.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.664 Nvme0n1 : 3.00 8541.33 33.36 0.00 0.00 0.00 0.00 0.00 00:09:51.664 =================================================================================================================== 00:09:51.664 Total : 8541.33 33.36 0.00 0.00 0.00 0.00 0.00 00:09:51.664 00:09:52.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.623 Nvme0n1 : 4.00 8505.75 33.23 0.00 0.00 0.00 0.00 0.00 00:09:52.623 =================================================================================================================== 00:09:52.623 Total : 8505.75 33.23 0.00 0.00 0.00 0.00 0.00 00:09:52.623 00:09:53.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.560 Nvme0n1 : 5.00 8458.60 33.04 0.00 0.00 0.00 0.00 0.00 00:09:53.560 =================================================================================================================== 00:09:53.560 Total : 8458.60 33.04 0.00 0.00 0.00 0.00 0.00 00:09:53.560 00:09:54.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.935 
Nvme0n1 : 6.00 8345.17 32.60 0.00 0.00 0.00 0.00 0.00 00:09:54.935 =================================================================================================================== 00:09:54.935 Total : 8345.17 32.60 0.00 0.00 0.00 0.00 0.00 00:09:54.935 00:09:55.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.885 Nvme0n1 : 7.00 8220.71 32.11 0.00 0.00 0.00 0.00 0.00 00:09:55.886 =================================================================================================================== 00:09:55.886 Total : 8220.71 32.11 0.00 0.00 0.00 0.00 0.00 00:09:55.886 00:09:56.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.818 Nvme0n1 : 8.00 8171.88 31.92 0.00 0.00 0.00 0.00 0.00 00:09:56.818 =================================================================================================================== 00:09:56.818 Total : 8171.88 31.92 0.00 0.00 0.00 0.00 0.00 00:09:56.818 00:09:57.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.754 Nvme0n1 : 9.00 8116.11 31.70 0.00 0.00 0.00 0.00 0.00 00:09:57.754 =================================================================================================================== 00:09:57.754 Total : 8116.11 31.70 0.00 0.00 0.00 0.00 0.00 00:09:57.754 00:09:58.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.688 Nvme0n1 : 10.00 8035.10 31.39 0.00 0.00 0.00 0.00 0.00 00:09:58.688 =================================================================================================================== 00:09:58.688 Total : 8035.10 31.39 0.00 0.00 0.00 0.00 0.00 00:09:58.688 00:09:58.688 00:09:58.688 Latency(us) 00:09:58.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.689 Nvme0n1 : 10.00 8044.00 31.42 0.00 0.00 15907.46 7506.85 45041.11 00:09:58.689 =================================================================================================================== 00:09:58.689 Total : 8044.00 31.42 0.00 0.00 15907.46 7506.85 45041.11 00:09:58.689 0 00:09:58.689 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74009 00:09:58.689 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 74009 ']' 00:09:58.689 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 74009 00:09:58.689 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:58.689 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:58.689 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74009 00:09:58.689 killing process with pid 74009 00:09:58.689 Received shutdown signal, test time was about 10.000000 seconds 00:09:58.689 00:09:58.689 Latency(us) 00:09:58.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.689 =================================================================================================================== 00:09:58.689 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:58.689 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:58.689 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = 
sudo ']' 00:09:58.689 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74009' 00:09:58.689 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 74009 00:09:58.689 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 74009 00:09:58.946 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:59.204 15:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:59.477 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbdd2d2a-c5ef-4827-a251-00d102bba548 00:09:59.477 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:59.735 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:59.735 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:59.735 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:59.993 [2024-07-15 15:55:53.694688] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbdd2d2a-c5ef-4827-a251-00d102bba548 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbdd2d2a-c5ef-4827-a251-00d102bba548 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:00.251 15:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbdd2d2a-c5ef-4827-a251-00d102bba548 00:10:00.510 2024/07/15 15:55:54 error on JSON-RPC call, method: 
bdev_lvol_get_lvstores, params: map[uuid:cbdd2d2a-c5ef-4827-a251-00d102bba548], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:00.510 request: 00:10:00.510 { 00:10:00.510 "method": "bdev_lvol_get_lvstores", 00:10:00.510 "params": { 00:10:00.510 "uuid": "cbdd2d2a-c5ef-4827-a251-00d102bba548" 00:10:00.510 } 00:10:00.510 } 00:10:00.510 Got JSON-RPC error response 00:10:00.510 GoRPCClient: error on JSON-RPC call 00:10:00.510 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:10:00.510 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:00.510 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:00.510 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:00.510 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:00.834 aio_bdev 00:10:00.834 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dac2fa93-ae90-4d31-ba96-bdb442d64828 00:10:00.834 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=dac2fa93-ae90-4d31-ba96-bdb442d64828 00:10:00.834 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:00.834 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:10:00.834 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:00.835 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:00.835 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:01.145 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dac2fa93-ae90-4d31-ba96-bdb442d64828 -t 2000 00:10:01.145 [ 00:10:01.145 { 00:10:01.145 "aliases": [ 00:10:01.145 "lvs/lvol" 00:10:01.145 ], 00:10:01.145 "assigned_rate_limits": { 00:10:01.145 "r_mbytes_per_sec": 0, 00:10:01.145 "rw_ios_per_sec": 0, 00:10:01.145 "rw_mbytes_per_sec": 0, 00:10:01.145 "w_mbytes_per_sec": 0 00:10:01.145 }, 00:10:01.145 "block_size": 4096, 00:10:01.145 "claimed": false, 00:10:01.145 "driver_specific": { 00:10:01.145 "lvol": { 00:10:01.145 "base_bdev": "aio_bdev", 00:10:01.145 "clone": false, 00:10:01.145 "esnap_clone": false, 00:10:01.145 "lvol_store_uuid": "cbdd2d2a-c5ef-4827-a251-00d102bba548", 00:10:01.145 "num_allocated_clusters": 38, 00:10:01.145 "snapshot": false, 00:10:01.145 "thin_provision": false 00:10:01.145 } 00:10:01.145 }, 00:10:01.145 "name": "dac2fa93-ae90-4d31-ba96-bdb442d64828", 00:10:01.145 "num_blocks": 38912, 00:10:01.145 "product_name": "Logical Volume", 00:10:01.145 "supported_io_types": { 00:10:01.145 "abort": false, 00:10:01.145 "compare": false, 00:10:01.145 "compare_and_write": false, 00:10:01.145 "copy": false, 00:10:01.145 "flush": false, 00:10:01.145 "get_zone_info": false, 00:10:01.145 "nvme_admin": false, 00:10:01.145 "nvme_io": false, 00:10:01.145 "nvme_io_md": false, 00:10:01.145 "nvme_iov_md": false, 00:10:01.145 "read": true, 00:10:01.145 "reset": true, 
00:10:01.145 "seek_data": true, 00:10:01.145 "seek_hole": true, 00:10:01.145 "unmap": true, 00:10:01.145 "write": true, 00:10:01.145 "write_zeroes": true, 00:10:01.145 "zcopy": false, 00:10:01.145 "zone_append": false, 00:10:01.145 "zone_management": false 00:10:01.145 }, 00:10:01.145 "uuid": "dac2fa93-ae90-4d31-ba96-bdb442d64828", 00:10:01.145 "zoned": false 00:10:01.145 } 00:10:01.145 ] 00:10:01.145 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:10:01.145 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbdd2d2a-c5ef-4827-a251-00d102bba548 00:10:01.145 15:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:01.403 15:55:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:01.403 15:55:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:01.403 15:55:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbdd2d2a-c5ef-4827-a251-00d102bba548 00:10:01.661 15:55:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:01.661 15:55:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete dac2fa93-ae90-4d31-ba96-bdb442d64828 00:10:01.919 15:55:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cbdd2d2a-c5ef-4827-a251-00d102bba548 00:10:02.485 15:55:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:02.485 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.050 ************************************ 00:10:03.050 END TEST lvs_grow_clean 00:10:03.050 ************************************ 00:10:03.050 00:10:03.050 real 0m18.837s 00:10:03.050 user 0m18.196s 00:10:03.050 sys 0m2.302s 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:03.050 ************************************ 00:10:03.050 START TEST lvs_grow_dirty 00:10:03.050 ************************************ 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.050 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:03.308 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:03.308 15:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:03.566 15:55:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:03.566 15:55:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:03.566 15:55:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:03.824 15:55:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:03.824 15:55:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:03.824 15:55:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 74e5677a-efff-4f3c-b7d3-4f9188db692a lvol 150 00:10:04.081 15:55:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4a5355c2-610a-498d-8e8e-1b9003215214 00:10:04.081 15:55:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:04.081 15:55:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:04.339 [2024-07-15 15:55:58.018064] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:04.339 [2024-07-15 15:55:58.018144] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:04.339 true 00:10:04.340 15:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:04.340 15:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:04.905 15:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:04.905 15:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:04.905 15:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4a5355c2-610a-498d-8e8e-1b9003215214 00:10:05.163 15:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:05.429 [2024-07-15 15:55:59.127023] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.429 15:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:05.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:05.995 15:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74461 00:10:05.995 15:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:05.995 15:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:05.995 15:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74461 /var/tmp/bdevperf.sock 00:10:05.995 15:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74461 ']' 00:10:05.995 15:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:05.995 15:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.995 15:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:05.995 15:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.995 15:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:05.995 [2024-07-15 15:55:59.514912] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
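A condensed sketch of the dirty-variant setup traced above may help when reading the rest of this run: it is the same RPC sequence the script issues, with the backing-file path, sizes and object names taken from this trace (the UUID values are the ones printed by this particular job and will differ on a re-run):

# Sketch of the lvs_grow dirty setup shown in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

# 200 MiB backing file -> AIO bdev with 4 KiB blocks
rm -f "$aio_file" && truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096

# lvstore with 4 MiB clusters; 200 MiB of backing space comes out to 49 data clusters in this run
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

# 150 MiB lvol, then enlarge the backing file and rescan the AIO bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$aio_file"
$rpc bdev_aio_rescan aio_bdev    # the lvstore still reports 49 clusters until it is grown

# export the lvol over NVMe/TCP so bdevperf can attach to it
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420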
00:10:05.995 [2024-07-15 15:55:59.515043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74461 ] 00:10:05.995 [2024-07-15 15:55:59.656184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.252 [2024-07-15 15:55:59.777517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.818 15:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.818 15:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:06.818 15:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:07.384 Nvme0n1 00:10:07.384 15:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:07.642 [ 00:10:07.642 { 00:10:07.642 "aliases": [ 00:10:07.642 "4a5355c2-610a-498d-8e8e-1b9003215214" 00:10:07.642 ], 00:10:07.642 "assigned_rate_limits": { 00:10:07.642 "r_mbytes_per_sec": 0, 00:10:07.642 "rw_ios_per_sec": 0, 00:10:07.642 "rw_mbytes_per_sec": 0, 00:10:07.642 "w_mbytes_per_sec": 0 00:10:07.642 }, 00:10:07.642 "block_size": 4096, 00:10:07.642 "claimed": false, 00:10:07.642 "driver_specific": { 00:10:07.642 "mp_policy": "active_passive", 00:10:07.642 "nvme": [ 00:10:07.642 { 00:10:07.642 "ctrlr_data": { 00:10:07.642 "ana_reporting": false, 00:10:07.642 "cntlid": 1, 00:10:07.642 "firmware_revision": "24.09", 00:10:07.642 "model_number": "SPDK bdev Controller", 00:10:07.642 "multi_ctrlr": true, 00:10:07.642 "oacs": { 00:10:07.642 "firmware": 0, 00:10:07.642 "format": 0, 00:10:07.642 "ns_manage": 0, 00:10:07.642 "security": 0 00:10:07.642 }, 00:10:07.642 "serial_number": "SPDK0", 00:10:07.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:07.642 "vendor_id": "0x8086" 00:10:07.642 }, 00:10:07.642 "ns_data": { 00:10:07.642 "can_share": true, 00:10:07.642 "id": 1 00:10:07.642 }, 00:10:07.642 "trid": { 00:10:07.642 "adrfam": "IPv4", 00:10:07.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:07.642 "traddr": "10.0.0.2", 00:10:07.642 "trsvcid": "4420", 00:10:07.642 "trtype": "TCP" 00:10:07.642 }, 00:10:07.642 "vs": { 00:10:07.642 "nvme_version": "1.3" 00:10:07.642 } 00:10:07.642 } 00:10:07.642 ] 00:10:07.642 }, 00:10:07.642 "memory_domains": [ 00:10:07.642 { 00:10:07.642 "dma_device_id": "system", 00:10:07.642 "dma_device_type": 1 00:10:07.642 } 00:10:07.642 ], 00:10:07.642 "name": "Nvme0n1", 00:10:07.642 "num_blocks": 38912, 00:10:07.643 "product_name": "NVMe disk", 00:10:07.643 "supported_io_types": { 00:10:07.643 "abort": true, 00:10:07.643 "compare": true, 00:10:07.643 "compare_and_write": true, 00:10:07.643 "copy": true, 00:10:07.643 "flush": true, 00:10:07.643 "get_zone_info": false, 00:10:07.643 "nvme_admin": true, 00:10:07.643 "nvme_io": true, 00:10:07.643 "nvme_io_md": false, 00:10:07.643 "nvme_iov_md": false, 00:10:07.643 "read": true, 00:10:07.643 "reset": true, 00:10:07.643 "seek_data": false, 00:10:07.643 "seek_hole": false, 00:10:07.643 "unmap": true, 00:10:07.643 "write": true, 00:10:07.643 "write_zeroes": true, 00:10:07.643 "zcopy": false, 00:10:07.643 
"zone_append": false, 00:10:07.643 "zone_management": false 00:10:07.643 }, 00:10:07.643 "uuid": "4a5355c2-610a-498d-8e8e-1b9003215214", 00:10:07.643 "zoned": false 00:10:07.643 } 00:10:07.643 ] 00:10:07.643 15:56:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74514 00:10:07.643 15:56:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:07.643 15:56:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:07.643 Running I/O for 10 seconds... 00:10:08.577 Latency(us) 00:10:08.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.577 Nvme0n1 : 1.00 7900.00 30.86 0.00 0.00 0.00 0.00 0.00 00:10:08.577 =================================================================================================================== 00:10:08.577 Total : 7900.00 30.86 0.00 0.00 0.00 0.00 0.00 00:10:08.577 00:10:09.510 15:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:09.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.768 Nvme0n1 : 2.00 8111.50 31.69 0.00 0.00 0.00 0.00 0.00 00:10:09.768 =================================================================================================================== 00:10:09.768 Total : 8111.50 31.69 0.00 0.00 0.00 0.00 0.00 00:10:09.768 00:10:09.768 true 00:10:09.768 15:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:09.768 15:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:10.026 15:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:10.026 15:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:10.026 15:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74514 00:10:10.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.593 Nvme0n1 : 3.00 8116.00 31.70 0.00 0.00 0.00 0.00 0.00 00:10:10.593 =================================================================================================================== 00:10:10.593 Total : 8116.00 31.70 0.00 0.00 0.00 0.00 0.00 00:10:10.593 00:10:11.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.528 Nvme0n1 : 4.00 8057.25 31.47 0.00 0.00 0.00 0.00 0.00 00:10:11.528 =================================================================================================================== 00:10:11.528 Total : 8057.25 31.47 0.00 0.00 0.00 0.00 0.00 00:10:11.528 00:10:12.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.973 Nvme0n1 : 5.00 8000.20 31.25 0.00 0.00 0.00 0.00 0.00 00:10:12.973 =================================================================================================================== 00:10:12.973 Total : 8000.20 31.25 0.00 0.00 0.00 0.00 0.00 00:10:12.973 00:10:13.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.538 
Nvme0n1 : 6.00 7850.17 30.66 0.00 0.00 0.00 0.00 0.00 00:10:13.538 =================================================================================================================== 00:10:13.538 Total : 7850.17 30.66 0.00 0.00 0.00 0.00 0.00 00:10:13.538 00:10:14.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.913 Nvme0n1 : 7.00 7800.57 30.47 0.00 0.00 0.00 0.00 0.00 00:10:14.913 =================================================================================================================== 00:10:14.913 Total : 7800.57 30.47 0.00 0.00 0.00 0.00 0.00 00:10:14.913 00:10:15.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.848 Nvme0n1 : 8.00 7748.50 30.27 0.00 0.00 0.00 0.00 0.00 00:10:15.848 =================================================================================================================== 00:10:15.848 Total : 7748.50 30.27 0.00 0.00 0.00 0.00 0.00 00:10:15.848 00:10:16.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.782 Nvme0n1 : 9.00 7729.78 30.19 0.00 0.00 0.00 0.00 0.00 00:10:16.782 =================================================================================================================== 00:10:16.782 Total : 7729.78 30.19 0.00 0.00 0.00 0.00 0.00 00:10:16.782 00:10:17.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.738 Nvme0n1 : 10.00 7707.10 30.11 0.00 0.00 0.00 0.00 0.00 00:10:17.738 =================================================================================================================== 00:10:17.738 Total : 7707.10 30.11 0.00 0.00 0.00 0.00 0.00 00:10:17.738 00:10:17.738 00:10:17.738 Latency(us) 00:10:17.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.738 Nvme0n1 : 10.00 7711.17 30.12 0.00 0.00 16592.88 7238.75 68157.44 00:10:17.738 =================================================================================================================== 00:10:17.738 Total : 7711.17 30.12 0.00 0.00 16592.88 7238.75 68157.44 00:10:17.738 0 00:10:17.738 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74461 00:10:17.738 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74461 ']' 00:10:17.738 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74461 00:10:17.738 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:10:17.738 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:17.738 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74461 00:10:17.739 killing process with pid 74461 00:10:17.739 Received shutdown signal, test time was about 10.000000 seconds 00:10:17.739 00:10:17.739 Latency(us) 00:10:17.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.739 =================================================================================================================== 00:10:17.739 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:17.739 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:17.739 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = 
sudo ']' 00:10:17.739 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74461' 00:10:17.739 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74461 00:10:17.739 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74461 00:10:17.997 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:18.255 15:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:18.513 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:18.513 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73842 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73842 00:10:18.771 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73842 Killed "${NVMF_APP[@]}" "$@" 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74678 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74678 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74678 ']' 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
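The "dirty" variant differs from the clean one in the teardown traced above: while bdevperf is writing, the lvstore is grown into the enlarged 400 MiB backing file, the cluster counts are checked, and then the first target process (pid 73842 in this run) is killed with SIGKILL so the lvstore is never cleanly unloaded. A condensed sketch of that step, using the RPCs and counts from this run ($nvmfpid below is just an illustrative name for the target's pid); the recovery itself is exercised further down, when the restarted target re-creates the AIO bdev over the same file:

# Sketch of the grow-then-crash step (UUID, pid and cluster counts are the ones from this run).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
lvs_uuid=74e5677a-efff-4f3c-b7d3-4f9188db692a

# Grow the lvstore into the enlarged backing file while bdevperf keeps writing.
$rpc bdev_lvol_grow_lvstore -u "$lvs_uuid"
$rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'   # 99 in this run

# The 150 MiB lvol occupies 38 clusters, leaving 61 of the 99 free.
$rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters'         # 61 in this run

# Leave the lvstore dirty: SIGKILL the target instead of shutting it down cleanly.
kill -9 "$nvmfpid"   # 73842 in this trace; blobstore recovery runs when a new target re-attaches the file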
00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.771 15:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:18.771 [2024-07-15 15:56:12.470923] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:10:18.771 [2024-07-15 15:56:12.471059] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.029 [2024-07-15 15:56:12.615089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.029 [2024-07-15 15:56:12.722691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.029 [2024-07-15 15:56:12.722812] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.029 [2024-07-15 15:56:12.722838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.029 [2024-07-15 15:56:12.722861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.029 [2024-07-15 15:56:12.722867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.029 [2024-07-15 15:56:12.722896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.968 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.968 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:19.968 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:19.968 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:19.968 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:19.968 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.968 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:20.225 [2024-07-15 15:56:13.785059] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:20.225 [2024-07-15 15:56:13.785344] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:20.225 [2024-07-15 15:56:13.785624] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:20.225 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:20.225 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4a5355c2-610a-498d-8e8e-1b9003215214 00:10:20.225 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4a5355c2-610a-498d-8e8e-1b9003215214 00:10:20.225 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:20.225 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:20.225 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:20.225 15:56:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:20.225 15:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:20.483 15:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a5355c2-610a-498d-8e8e-1b9003215214 -t 2000 00:10:20.742 [ 00:10:20.742 { 00:10:20.742 "aliases": [ 00:10:20.742 "lvs/lvol" 00:10:20.742 ], 00:10:20.742 "assigned_rate_limits": { 00:10:20.742 "r_mbytes_per_sec": 0, 00:10:20.742 "rw_ios_per_sec": 0, 00:10:20.742 "rw_mbytes_per_sec": 0, 00:10:20.742 "w_mbytes_per_sec": 0 00:10:20.742 }, 00:10:20.742 "block_size": 4096, 00:10:20.742 "claimed": false, 00:10:20.742 "driver_specific": { 00:10:20.742 "lvol": { 00:10:20.742 "base_bdev": "aio_bdev", 00:10:20.742 "clone": false, 00:10:20.742 "esnap_clone": false, 00:10:20.742 "lvol_store_uuid": "74e5677a-efff-4f3c-b7d3-4f9188db692a", 00:10:20.742 "num_allocated_clusters": 38, 00:10:20.742 "snapshot": false, 00:10:20.742 "thin_provision": false 00:10:20.742 } 00:10:20.742 }, 00:10:20.742 "name": "4a5355c2-610a-498d-8e8e-1b9003215214", 00:10:20.742 "num_blocks": 38912, 00:10:20.742 "product_name": "Logical Volume", 00:10:20.742 "supported_io_types": { 00:10:20.742 "abort": false, 00:10:20.742 "compare": false, 00:10:20.742 "compare_and_write": false, 00:10:20.742 "copy": false, 00:10:20.742 "flush": false, 00:10:20.742 "get_zone_info": false, 00:10:20.742 "nvme_admin": false, 00:10:20.742 "nvme_io": false, 00:10:20.742 "nvme_io_md": false, 00:10:20.742 "nvme_iov_md": false, 00:10:20.742 "read": true, 00:10:20.742 "reset": true, 00:10:20.742 "seek_data": true, 00:10:20.742 "seek_hole": true, 00:10:20.742 "unmap": true, 00:10:20.742 "write": true, 00:10:20.742 "write_zeroes": true, 00:10:20.742 "zcopy": false, 00:10:20.742 "zone_append": false, 00:10:20.742 "zone_management": false 00:10:20.742 }, 00:10:20.742 "uuid": "4a5355c2-610a-498d-8e8e-1b9003215214", 00:10:20.742 "zoned": false 00:10:20.742 } 00:10:20.742 ] 00:10:20.742 15:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:20.742 15:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:20.742 15:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:21.309 15:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:21.309 15:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:21.309 15:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:21.567 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:21.567 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:21.825 [2024-07-15 15:56:15.335038] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:21.825 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:22.083 2024/07/15 15:56:15 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:74e5677a-efff-4f3c-b7d3-4f9188db692a], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:22.083 request: 00:10:22.083 { 00:10:22.083 "method": "bdev_lvol_get_lvstores", 00:10:22.083 "params": { 00:10:22.083 "uuid": "74e5677a-efff-4f3c-b7d3-4f9188db692a" 00:10:22.083 } 00:10:22.083 } 00:10:22.083 Got JSON-RPC error response 00:10:22.083 GoRPCClient: error on JSON-RPC call 00:10:22.083 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:22.083 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:22.083 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:22.083 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:22.083 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:22.341 aio_bdev 00:10:22.341 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4a5355c2-610a-498d-8e8e-1b9003215214 00:10:22.341 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4a5355c2-610a-498d-8e8e-1b9003215214 00:10:22.341 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:22.341 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:22.341 15:56:15 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:22.341 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:22.341 15:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:22.599 15:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a5355c2-610a-498d-8e8e-1b9003215214 -t 2000 00:10:22.858 [ 00:10:22.858 { 00:10:22.858 "aliases": [ 00:10:22.858 "lvs/lvol" 00:10:22.858 ], 00:10:22.858 "assigned_rate_limits": { 00:10:22.858 "r_mbytes_per_sec": 0, 00:10:22.858 "rw_ios_per_sec": 0, 00:10:22.858 "rw_mbytes_per_sec": 0, 00:10:22.858 "w_mbytes_per_sec": 0 00:10:22.858 }, 00:10:22.858 "block_size": 4096, 00:10:22.858 "claimed": false, 00:10:22.858 "driver_specific": { 00:10:22.858 "lvol": { 00:10:22.858 "base_bdev": "aio_bdev", 00:10:22.858 "clone": false, 00:10:22.858 "esnap_clone": false, 00:10:22.858 "lvol_store_uuid": "74e5677a-efff-4f3c-b7d3-4f9188db692a", 00:10:22.858 "num_allocated_clusters": 38, 00:10:22.858 "snapshot": false, 00:10:22.858 "thin_provision": false 00:10:22.858 } 00:10:22.858 }, 00:10:22.858 "name": "4a5355c2-610a-498d-8e8e-1b9003215214", 00:10:22.858 "num_blocks": 38912, 00:10:22.858 "product_name": "Logical Volume", 00:10:22.858 "supported_io_types": { 00:10:22.858 "abort": false, 00:10:22.858 "compare": false, 00:10:22.858 "compare_and_write": false, 00:10:22.858 "copy": false, 00:10:22.858 "flush": false, 00:10:22.858 "get_zone_info": false, 00:10:22.858 "nvme_admin": false, 00:10:22.858 "nvme_io": false, 00:10:22.858 "nvme_io_md": false, 00:10:22.858 "nvme_iov_md": false, 00:10:22.858 "read": true, 00:10:22.858 "reset": true, 00:10:22.858 "seek_data": true, 00:10:22.858 "seek_hole": true, 00:10:22.858 "unmap": true, 00:10:22.858 "write": true, 00:10:22.858 "write_zeroes": true, 00:10:22.858 "zcopy": false, 00:10:22.858 "zone_append": false, 00:10:22.858 "zone_management": false 00:10:22.858 }, 00:10:22.858 "uuid": "4a5355c2-610a-498d-8e8e-1b9003215214", 00:10:22.858 "zoned": false 00:10:22.858 } 00:10:22.858 ] 00:10:22.858 15:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:22.858 15:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:22.858 15:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:23.117 15:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:23.117 15:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:23.117 15:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:23.376 15:56:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:23.376 15:56:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4a5355c2-610a-498d-8e8e-1b9003215214 00:10:23.746 15:56:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 74e5677a-efff-4f3c-b7d3-4f9188db692a 00:10:24.006 15:56:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:24.265 15:56:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:24.524 ************************************ 00:10:24.524 END TEST lvs_grow_dirty 00:10:24.524 ************************************ 00:10:24.524 00:10:24.524 real 0m21.605s 00:10:24.524 user 0m44.411s 00:10:24.524 sys 0m8.749s 00:10:24.524 15:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.524 15:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:24.783 nvmf_trace.0 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.783 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:25.042 rmmod nvme_tcp 00:10:25.042 rmmod nvme_fabrics 00:10:25.042 rmmod nvme_keyring 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74678 ']' 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74678 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74678 ']' 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74678 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:10:25.042 15:56:18 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74678 00:10:25.042 killing process with pid 74678 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74678' 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74678 00:10:25.042 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74678 00:10:25.300 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.300 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.300 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.301 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.301 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.301 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.301 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.301 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.301 15:56:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:25.301 00:10:25.301 real 0m43.069s 00:10:25.301 user 1m9.720s 00:10:25.301 sys 0m11.832s 00:10:25.301 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:25.301 15:56:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:25.301 ************************************ 00:10:25.301 END TEST nvmf_lvs_grow 00:10:25.301 ************************************ 00:10:25.301 15:56:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:25.301 15:56:18 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:25.301 15:56:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:25.301 15:56:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.301 15:56:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:25.301 ************************************ 00:10:25.301 START TEST nvmf_bdev_io_wait 00:10:25.301 ************************************ 00:10:25.301 15:56:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:25.560 * Looking for test storage... 
00:10:25.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:25.560 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:25.561 Cannot find device "nvmf_tgt_br" 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:25.561 Cannot find device "nvmf_tgt_br2" 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:25.561 Cannot find device "nvmf_tgt_br" 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:25.561 Cannot find device "nvmf_tgt_br2" 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
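The "Cannot find device" and "Cannot open network namespace" messages above come from nvmf_veth_init's best-effort pre-clean of a topology that does not exist yet; the commands that follow rebuild it from scratch. A condensed sketch of that topology, using the same interface names and addresses as this trace:

# Sketch of the veth/namespace topology built by nvmf_veth_init (names and addresses from this trace).
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator side, two for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side ends move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator on 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side ends together.
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic to port 4420 and forwarding across the bridge, then sanity-ping both directions.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1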
00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:25.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:25.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:25.561 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:25.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:25.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:10:25.820 00:10:25.820 --- 10.0.0.2 ping statistics --- 00:10:25.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.820 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:25.820 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:25.820 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:10:25.820 00:10:25.820 --- 10.0.0.3 ping statistics --- 00:10:25.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.820 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:25.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:25.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:25.820 00:10:25.820 --- 10.0.0.1 ping statistics --- 00:10:25.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.820 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=75098 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 75098 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 75098 ']' 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
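Collapsing the xtrace above into a standalone sketch: nvmf_veth_init builds one network namespace for the target, three veth pairs, and a bridge tying the host-side ends together, then opens TCP port 4420 toward the initiator interface (same commands as in the trace, with test plumbing and error guards omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target ends move into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) only verify that the bridged path works; the target itself is then started inside the namespace via "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ...", which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace above.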
00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.820 15:56:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:25.820 [2024-07-15 15:56:19.541355] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:10:25.820 [2024-07-15 15:56:19.541925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.079 [2024-07-15 15:56:19.682428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.338 [2024-07-15 15:56:19.808334] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.338 [2024-07-15 15:56:19.808407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.338 [2024-07-15 15:56:19.808423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.338 [2024-07-15 15:56:19.808433] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.338 [2024-07-15 15:56:19.808442] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.338 [2024-07-15 15:56:19.808595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.338 [2024-07-15 15:56:19.809255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.338 [2024-07-15 15:56:19.809408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.338 [2024-07-15 15:56:19.809481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.906 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.165 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.165 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.165 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.165 15:56:20 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.165 [2024-07-15 15:56:20.677132] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.166 Malloc0 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.166 [2024-07-15 15:56:20.743847] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=75151 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=75153 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:27.166 { 00:10:27.166 "params": { 00:10:27.166 "name": "Nvme$subsystem", 00:10:27.166 "trtype": "$TEST_TRANSPORT", 00:10:27.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.166 "adrfam": "ipv4", 00:10:27.166 "trsvcid": "$NVMF_PORT", 00:10:27.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.166 "hdgst": ${hdgst:-false}, 00:10:27.166 "ddgst": 
${ddgst:-false} 00:10:27.166 }, 00:10:27.166 "method": "bdev_nvme_attach_controller" 00:10:27.166 } 00:10:27.166 EOF 00:10:27.166 )") 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=75155 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:27.166 { 00:10:27.166 "params": { 00:10:27.166 "name": "Nvme$subsystem", 00:10:27.166 "trtype": "$TEST_TRANSPORT", 00:10:27.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.166 "adrfam": "ipv4", 00:10:27.166 "trsvcid": "$NVMF_PORT", 00:10:27.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.166 "hdgst": ${hdgst:-false}, 00:10:27.166 "ddgst": ${ddgst:-false} 00:10:27.166 }, 00:10:27.166 "method": "bdev_nvme_attach_controller" 00:10:27.166 } 00:10:27.166 EOF 00:10:27.166 )") 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=75158 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:27.166 { 00:10:27.166 "params": { 00:10:27.166 "name": "Nvme$subsystem", 00:10:27.166 "trtype": "$TEST_TRANSPORT", 00:10:27.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.166 "adrfam": "ipv4", 00:10:27.166 "trsvcid": "$NVMF_PORT", 00:10:27.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.166 "hdgst": ${hdgst:-false}, 00:10:27.166 "ddgst": ${ddgst:-false} 00:10:27.166 }, 00:10:27.166 "method": "bdev_nvme_attach_controller" 00:10:27.166 } 00:10:27.166 EOF 00:10:27.166 )") 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
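For readers tracking the RPC side: rpc_cmd in these scripts drives the target over /var/tmp/spdk.sock, and before the four bdevperf jobs (write, read, flush and unmap on core masks 0x10, 0x20, 0x40 and 0x80) start, the target has already been given a malloc bdev exported through an NVMe-oF/TCP subsystem. Replayed outside the harness with the standalone RPC client it would look roughly like this (scripts/rpc.py is assumed here as the equivalent of rpc_cmd; arguments are taken from the trace):

    ./scripts/rpc.py bdev_set_options -p 5 -c 1          # deliberately tiny bdev_io pool, so I/O hits the io_wait path
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON fragments being assembled here by gen_nvmf_target_json are the per-instance bdev_nvme_attach_controller parameters that each bdevperf reads over /dev/fd/63 (process substitution) at startup.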
00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:27.166 { 00:10:27.166 "params": { 00:10:27.166 "name": "Nvme$subsystem", 00:10:27.166 "trtype": "$TEST_TRANSPORT", 00:10:27.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.166 "adrfam": "ipv4", 00:10:27.166 "trsvcid": "$NVMF_PORT", 00:10:27.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.166 "hdgst": ${hdgst:-false}, 00:10:27.166 "ddgst": ${ddgst:-false} 00:10:27.166 }, 00:10:27.166 "method": "bdev_nvme_attach_controller" 00:10:27.166 } 00:10:27.166 EOF 00:10:27.166 )") 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:27.166 "params": { 00:10:27.166 "name": "Nvme1", 00:10:27.166 "trtype": "tcp", 00:10:27.166 "traddr": "10.0.0.2", 00:10:27.166 "adrfam": "ipv4", 00:10:27.166 "trsvcid": "4420", 00:10:27.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:27.166 "hdgst": false, 00:10:27.166 "ddgst": false 00:10:27.166 }, 00:10:27.166 "method": "bdev_nvme_attach_controller" 00:10:27.166 }' 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:27.166 "params": { 00:10:27.166 "name": "Nvme1", 00:10:27.166 "trtype": "tcp", 00:10:27.166 "traddr": "10.0.0.2", 00:10:27.166 "adrfam": "ipv4", 00:10:27.166 "trsvcid": "4420", 00:10:27.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:27.166 "hdgst": false, 00:10:27.166 "ddgst": false 00:10:27.166 }, 00:10:27.166 "method": "bdev_nvme_attach_controller" 00:10:27.166 }' 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:27.166 "params": { 00:10:27.166 "name": "Nvme1", 00:10:27.166 "trtype": "tcp", 00:10:27.166 "traddr": "10.0.0.2", 00:10:27.166 "adrfam": "ipv4", 00:10:27.166 "trsvcid": "4420", 00:10:27.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:27.166 "hdgst": false, 00:10:27.166 "ddgst": false 00:10:27.166 }, 00:10:27.166 "method": "bdev_nvme_attach_controller" 00:10:27.166 }' 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:27.166 "params": { 00:10:27.166 "name": "Nvme1", 00:10:27.166 "trtype": "tcp", 00:10:27.166 "traddr": "10.0.0.2", 00:10:27.166 "adrfam": "ipv4", 00:10:27.166 "trsvcid": "4420", 00:10:27.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:27.166 "hdgst": false, 00:10:27.166 "ddgst": false 00:10:27.166 }, 00:10:27.166 "method": "bdev_nvme_attach_controller" 00:10:27.166 }' 00:10:27.166 [2024-07-15 15:56:20.807625] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:10:27.166 [2024-07-15 15:56:20.807704] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:27.166 15:56:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 75151 00:10:27.166 [2024-07-15 15:56:20.832289] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:10:27.166 [2024-07-15 15:56:20.832533] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:27.166 [2024-07-15 15:56:20.840803] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:10:27.167 [2024-07-15 15:56:20.840884] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:27.167 [2024-07-15 15:56:20.843360] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:10:27.167 [2024-07-15 15:56:20.843437] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:27.423 [2024-07-15 15:56:21.009755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.423 [2024-07-15 15:56:21.089581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.423 [2024-07-15 15:56:21.116514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:27.681 [2024-07-15 15:56:21.174499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.681 [2024-07-15 15:56:21.191498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:27.681 [2024-07-15 15:56:21.252494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.681 Running I/O for 1 seconds... 00:10:27.681 [2024-07-15 15:56:21.275184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:27.681 Running I/O for 1 seconds... 00:10:27.681 [2024-07-15 15:56:21.354444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:27.939 Running I/O for 1 seconds... 00:10:27.939 Running I/O for 1 seconds... 00:10:28.874 00:10:28.874 Latency(us) 00:10:28.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.874 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:28.874 Nvme1n1 : 1.02 7357.63 28.74 0.00 0.00 17161.18 6553.60 30384.87 00:10:28.874 =================================================================================================================== 00:10:28.874 Total : 7357.63 28.74 0.00 0.00 17161.18 6553.60 30384.87 00:10:28.874 00:10:28.874 Latency(us) 00:10:28.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.874 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:28.874 Nvme1n1 : 1.00 196224.18 766.50 0.00 0.00 649.64 284.86 774.52 00:10:28.874 =================================================================================================================== 00:10:28.874 Total : 196224.18 766.50 0.00 0.00 649.64 284.86 774.52 00:10:28.874 00:10:28.874 Latency(us) 00:10:28.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.874 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:28.874 Nvme1n1 : 1.01 7772.21 30.36 0.00 0.00 16374.37 4587.52 21924.77 00:10:28.874 =================================================================================================================== 00:10:28.874 Total : 7772.21 30.36 0.00 0.00 16374.37 4587.52 21924.77 00:10:28.874 00:10:28.874 Latency(us) 00:10:28.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.874 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:28.874 Nvme1n1 : 1.00 7451.99 29.11 0.00 0.00 17125.88 4974.78 44564.48 00:10:28.874 =================================================================================================================== 00:10:28.874 Total : 7451.99 29.11 0.00 0.00 17125.88 4974.78 44564.48 00:10:28.874 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 75153 00:10:29.133 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 75155 00:10:29.133 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 75158 00:10:29.133 
15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.133 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.133 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:29.133 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.133 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:29.133 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:29.133 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.133 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.391 rmmod nvme_tcp 00:10:29.391 rmmod nvme_fabrics 00:10:29.391 rmmod nvme_keyring 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 75098 ']' 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 75098 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 75098 ']' 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 75098 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75098 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:29.391 killing process with pid 75098 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75098' 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 75098 00:10:29.391 15:56:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 75098 00:10:29.649 15:56:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.649 15:56:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:29.649 15:56:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.649 15:56:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.649 15:56:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.649 15:56:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.650 15:56:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:29.650 15:56:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.650 15:56:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:29.650 00:10:29.650 real 0m4.202s 00:10:29.650 user 0m18.486s 00:10:29.650 sys 0m2.069s 00:10:29.650 15:56:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.650 15:56:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:29.650 ************************************ 00:10:29.650 END TEST nvmf_bdev_io_wait 00:10:29.650 ************************************ 00:10:29.650 15:56:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:29.650 15:56:23 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:29.650 15:56:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:29.650 15:56:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.650 15:56:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:29.650 ************************************ 00:10:29.650 START TEST nvmf_queue_depth 00:10:29.650 ************************************ 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:29.650 * Looking for test storage... 00:10:29.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:29.650 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:29.908 Cannot find device 
"nvmf_tgt_br" 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:29.908 Cannot find device "nvmf_tgt_br2" 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:29.908 Cannot find device "nvmf_tgt_br" 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:29.908 Cannot find device "nvmf_tgt_br2" 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:29.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:29.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:29.908 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:30.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:10:30.167 00:10:30.167 --- 10.0.0.2 ping statistics --- 00:10:30.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.167 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:30.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:30.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:10:30.167 00:10:30.167 --- 10.0.0.3 ping statistics --- 00:10:30.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.167 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:30.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:30.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:30.167 00:10:30.167 --- 10.0.0.1 ping statistics --- 00:10:30.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.167 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=75389 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 75389 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75389 ']' 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:30.167 15:56:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:30.167 [2024-07-15 15:56:23.769890] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:10:30.167 [2024-07-15 15:56:23.769996] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.426 [2024-07-15 15:56:23.911782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.426 [2024-07-15 15:56:24.018842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.426 [2024-07-15 15:56:24.018903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:30.426 [2024-07-15 15:56:24.018913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.426 [2024-07-15 15:56:24.018936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.426 [2024-07-15 15:56:24.018944] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.426 [2024-07-15 15:56:24.018972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.360 [2024-07-15 15:56:24.827819] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.360 Malloc0 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.360 [2024-07-15 15:56:24.890686] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.360 15:56:24 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75439 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75439 /var/tmp/bdevperf.sock 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75439 ']' 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.360 15:56:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.360 [2024-07-15 15:56:24.943442] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:10:31.360 [2024-07-15 15:56:24.943526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75439 ] 00:10:31.360 [2024-07-15 15:56:25.080633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.619 [2024-07-15 15:56:25.182203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.555 15:56:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.555 15:56:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:32.555 15:56:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:32.555 15:56:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.555 15:56:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:32.555 NVMe0n1 00:10:32.555 15:56:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.555 15:56:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:32.555 Running I/O for 10 seconds... 
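The queue-depth run set up above uses a single bdevperf in RPC-server mode: it is launched with -z -r /var/tmp/bdevperf.sock so it waits for configuration, an NVMe-oF controller is attached to it over that socket, and perform_tests starts the 10-second verify workload at queue depth 1024. Condensed from the trace (paths relative to the spdk repo, arguments as logged):

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The table that follows reports the resulting IOPS and latency for NVMe0n1 (core mask 0x1, 4 KiB verify I/O, queue depth 1024).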
00:10:42.548 00:10:42.548 Latency(us) 00:10:42.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.549 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:42.549 Verification LBA range: start 0x0 length 0x4000 00:10:42.549 NVMe0n1 : 10.08 9026.42 35.26 0.00 0.00 112973.71 26571.87 113913.48 00:10:42.549 =================================================================================================================== 00:10:42.549 Total : 9026.42 35.26 0.00 0.00 112973.71 26571.87 113913.48 00:10:42.549 0 00:10:42.549 15:56:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75439 00:10:42.549 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75439 ']' 00:10:42.549 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75439 00:10:42.549 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:42.807 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:42.807 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75439 00:10:42.807 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:42.807 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:42.807 killing process with pid 75439 00:10:42.807 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75439' 00:10:42.807 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75439 00:10:42.807 Received shutdown signal, test time was about 10.000000 seconds 00:10:42.807 00:10:42.807 Latency(us) 00:10:42.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.807 =================================================================================================================== 00:10:42.807 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:42.807 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75439 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:43.066 rmmod nvme_tcp 00:10:43.066 rmmod nvme_fabrics 00:10:43.066 rmmod nvme_keyring 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 75389 ']' 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 75389 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75389 ']' 00:10:43.066 
15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75389 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75389 00:10:43.066 killing process with pid 75389 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75389' 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75389 00:10:43.066 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75389 00:10:43.325 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:43.325 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:43.325 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:43.325 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.325 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.325 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.325 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.325 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.325 15:56:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:43.325 00:10:43.325 real 0m13.742s 00:10:43.325 user 0m23.703s 00:10:43.325 sys 0m2.098s 00:10:43.325 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.325 15:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.325 ************************************ 00:10:43.325 END TEST nvmf_queue_depth 00:10:43.325 ************************************ 00:10:43.325 15:56:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:43.325 15:56:37 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:43.325 15:56:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:43.325 15:56:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.325 15:56:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:43.325 ************************************ 00:10:43.325 START TEST nvmf_target_multipath 00:10:43.325 ************************************ 00:10:43.325 15:56:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:43.584 * Looking for test storage... 
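The repeated kill/uname/ps/wait sequence in the teardown above is the autotest killprocess helper at work. A condensed, hypothetical restatement of that pattern (the real helper in autotest_common.sh carries extra checks, e.g. special handling when the process runs under sudo) looks like this:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                    # bail out if the pid is already gone
    [[ $(uname) == Linux ]] || return 1           # the ps flags below are Linux-specific
    local name
    name=$(ps --no-headers -o comm= "$pid")       # SPDK apps show up as reactor_<n>
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid"                                   # reap the background child before continuing
}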
00:10:43.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:43.584 15:56:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:43.584 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:43.584 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.584 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.584 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.584 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.584 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.584 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.584 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.585 15:56:37 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:43.585 Cannot find device "nvmf_tgt_br" 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:43.585 Cannot find device "nvmf_tgt_br2" 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:43.585 Cannot find device "nvmf_tgt_br" 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:10:43.585 
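The nvmf_veth_init steps that follow build a small bridged topology: the initiator address 10.0.0.1 stays in the root namespace, while the two target addresses 10.0.0.2 and 10.0.0.3 live on veth peers moved into the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge. A minimal sketch of that topology, with names and addresses taken from the trace below (cleanup and iptables rules omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br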
15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:43.585 Cannot find device "nvmf_tgt_br2" 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:43.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:43.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:43.585 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:43.843 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:43.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:10:43.844 00:10:43.844 --- 10.0.0.2 ping statistics --- 00:10:43.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.844 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:43.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:43.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:10:43.844 00:10:43.844 --- 10.0.0.3 ping statistics --- 00:10:43.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.844 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:43.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:43.844 00:10:43.844 --- 10.0.0.1 ping statistics --- 00:10:43.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.844 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75774 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
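waitforlisten, invoked next, blocks until the nvmf_tgt just launched inside the namespace answers on its JSON-RPC socket. Its body is not shown in this trace; a hypothetical, minimal stand-in (socket path and rpc.py location as used elsewhere in this log, retry count arbitrary) could poll like this:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods succeeds only once the target has created /var/tmp/spdk.sock
    "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null && break
    sleep 0.5
done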
00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75774 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75774 ']' 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.844 15:56:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:44.102 [2024-07-15 15:56:37.576789] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:10:44.102 [2024-07-15 15:56:37.576892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.102 [2024-07-15 15:56:37.718862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.360 [2024-07-15 15:56:37.849576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.360 [2024-07-15 15:56:37.849663] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.360 [2024-07-15 15:56:37.849690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.360 [2024-07-15 15:56:37.849708] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.360 [2024-07-15 15:56:37.849717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
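Once the reactors are up, the target-side provisioning that follows in the trace condenses to a handful of RPCs plus two host connections, one per listener address, so the kernel host sees the same subsystem over two paths (commands copied from the log below; NVME_HOSTNQN and NVME_HOSTID are the generated values shown when common.sh was sourced):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G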
00:10:44.360 [2024-07-15 15:56:37.849928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.360 [2024-07-15 15:56:37.850606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.360 [2024-07-15 15:56:37.850790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.360 [2024-07-15 15:56:37.850798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.927 15:56:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.927 15:56:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:10:44.927 15:56:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:44.927 15:56:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:44.927 15:56:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:44.927 15:56:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.927 15:56:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:45.185 [2024-07-15 15:56:38.805709] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.185 15:56:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:45.443 Malloc0 00:10:45.443 15:56:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:46.010 15:56:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.010 15:56:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.268 [2024-07-15 15:56:39.913737] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.268 15:56:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:46.527 [2024-07-15 15:56:40.146073] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:46.527 15:56:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:46.837 15:56:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:47.096 15:56:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:47.096 15:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:47.096 15:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:10:47.096 15:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:47.096 15:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:48.997 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75914 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:48.998 15:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:48.998 [global] 00:10:48.998 thread=1 00:10:48.998 invalidate=1 00:10:48.998 rw=randrw 00:10:48.998 time_based=1 00:10:48.998 runtime=6 00:10:48.998 ioengine=libaio 00:10:48.998 direct=1 00:10:48.998 bs=4096 00:10:48.998 iodepth=128 00:10:48.998 norandommap=0 00:10:48.998 numjobs=1 00:10:48.998 00:10:48.998 verify_dump=1 00:10:48.998 verify_backlog=512 00:10:48.998 verify_state_save=0 00:10:48.998 do_verify=1 00:10:48.998 verify=crc32c-intel 00:10:48.998 [job0] 00:10:48.998 filename=/dev/nvme0n1 00:10:48.998 Could not set queue depth (nvme0n1) 00:10:49.256 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.256 fio-3.35 00:10:49.256 Starting 1 thread 00:10:50.190 15:56:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:50.448 15:56:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:50.707 15:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:51.666 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:51.666 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:51.666 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:51.666 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:51.925 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:52.183 15:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:53.131 15:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:53.131 15:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:53.131 15:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:53.131 15:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75914 00:10:55.661 00:10:55.661 job0: (groupid=0, jobs=1): err= 0: pid=75936: Mon Jul 15 15:56:48 2024 00:10:55.661 read: IOPS=10.9k, BW=42.6MiB/s (44.6MB/s)(256MiB/6006msec) 00:10:55.661 slat (usec): min=2, max=5808, avg=51.96, stdev=236.70 00:10:55.661 clat (usec): min=351, max=14326, avg=8006.82, stdev=1211.43 00:10:55.661 lat (usec): min=436, max=14354, avg=8058.78, stdev=1221.82 00:10:55.661 clat percentiles (usec): 00:10:55.661 | 1.00th=[ 4817], 5.00th=[ 6128], 10.00th=[ 6915], 20.00th=[ 7242], 00:10:55.661 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8160], 00:10:55.661 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[ 9896], 00:10:55.661 | 99.00th=[11863], 99.50th=[12256], 99.90th=[13042], 99.95th=[13304], 00:10:55.661 | 99.99th=[14091] 00:10:55.661 bw ( KiB/s): min=10960, max=27313, per=52.47%, avg=22869.82, stdev=5086.52, samples=11 00:10:55.661 iops : min= 2740, max= 6828, avg=5717.36, stdev=1271.56, samples=11 00:10:55.661 write: IOPS=6519, BW=25.5MiB/s (26.7MB/s)(135MiB/5289msec); 0 zone resets 00:10:55.661 slat (usec): min=4, max=4021, avg=65.28, stdev=163.70 00:10:55.661 clat (usec): min=716, max=13778, avg=6895.27, stdev=1030.71 00:10:55.661 lat (usec): min=781, max=13826, avg=6960.55, stdev=1034.28 00:10:55.661 clat percentiles (usec): 00:10:55.661 | 1.00th=[ 3785], 5.00th=[ 4948], 10.00th=[ 5866], 20.00th=[ 6325], 00:10:55.661 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:10:55.661 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8160], 00:10:55.661 | 99.00th=[10028], 99.50th=[10683], 99.90th=[12387], 99.95th=[12649], 00:10:55.661 | 99.99th=[13304] 00:10:55.661 bw ( KiB/s): min=11624, max=27289, per=87.85%, avg=22907.45, stdev=4811.26, samples=11 00:10:55.661 iops : min= 2906, max= 6822, avg=5726.73, stdev=1202.82, samples=11 00:10:55.661 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:55.661 lat (msec) : 2=0.08%, 4=0.61%, 10=95.83%, 20=3.47% 00:10:55.661 cpu : usr=5.38%, sys=23.45%, ctx=6544, majf=0, minf=114 00:10:55.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:55.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.661 issued rwts: total=65446,34479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.661 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.661 00:10:55.661 Run status group 0 (all jobs): 00:10:55.661 READ: bw=42.6MiB/s (44.6MB/s), 42.6MiB/s-42.6MiB/s (44.6MB/s-44.6MB/s), io=256MiB (268MB), run=6006-6006msec 00:10:55.661 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=135MiB (141MB), run=5289-5289msec 00:10:55.661 00:10:55.661 Disk stats (read/write): 00:10:55.661 nvme0n1: ios=64790/33635, 
merge=0/0, ticks=483794/215501, in_queue=699295, util=98.61% 00:10:55.661 15:56:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:55.661 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:55.920 15:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:56.855 15:56:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:56.855 15:56:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:56.855 15:56:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:56.855 15:56:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:56.855 15:56:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=76070 00:10:56.855 15:56:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:56.855 15:56:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:56.855 [global] 00:10:56.855 thread=1 00:10:56.855 invalidate=1 00:10:56.855 rw=randrw 00:10:56.855 time_based=1 00:10:56.855 runtime=6 00:10:56.855 ioengine=libaio 00:10:56.855 direct=1 00:10:56.855 bs=4096 00:10:56.855 iodepth=128 00:10:56.855 norandommap=0 00:10:56.855 numjobs=1 00:10:56.855 00:10:57.113 verify_dump=1 00:10:57.113 verify_backlog=512 00:10:57.113 verify_state_save=0 00:10:57.113 do_verify=1 00:10:57.113 verify=crc32c-intel 00:10:57.113 [job0] 00:10:57.113 filename=/dev/nvme0n1 00:10:57.113 Could not set queue depth (nvme0n1) 00:10:57.113 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.113 fio-3.35 00:10:57.113 Starting 1 thread 00:10:58.043 15:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:58.301 15:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:58.568 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:58.568 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:58.568 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:58.568 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:58.568 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:58.568 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:58.568 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:58.568 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:58.568 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:58.569 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:58.569 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:58.569 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:58.569 15:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:59.521 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:59.521 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:59.521 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:59.521 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:59.779 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:00.037 15:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:01.413 15:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:01.413 15:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:01.413 15:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:01.413 15:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 76070 00:11:03.316 00:11:03.316 job0: (groupid=0, jobs=1): err= 0: pid=76091: Mon Jul 15 15:56:56 2024 00:11:03.316 read: IOPS=12.0k, BW=46.9MiB/s (49.2MB/s)(281MiB/6004msec) 00:11:03.316 slat (usec): min=4, max=5537, avg=42.88, stdev=204.15 00:11:03.316 clat (usec): min=204, max=45979, avg=7338.31, stdev=1809.89 00:11:03.316 lat (usec): min=224, max=46005, avg=7381.19, stdev=1825.51 00:11:03.316 clat percentiles (usec): 00:11:03.316 | 1.00th=[ 2343], 5.00th=[ 3687], 10.00th=[ 4686], 20.00th=[ 6259], 00:11:03.316 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7832], 00:11:03.316 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9634], 00:11:03.316 | 99.00th=[11469], 99.50th=[11863], 99.90th=[12780], 99.95th=[13173], 00:11:03.316 | 99.99th=[45351] 00:11:03.316 bw ( KiB/s): min= 8056, max=41376, per=53.28%, avg=25579.55, stdev=10733.33, samples=11 00:11:03.316 iops : min= 2014, max=10344, avg=6394.82, stdev=2683.23, samples=11 00:11:03.316 write: IOPS=7340, BW=28.7MiB/s (30.1MB/s)(150MiB/5232msec); 0 zone resets 00:11:03.316 slat (usec): min=12, max=3867, avg=52.08, stdev=128.39 00:11:03.316 clat (usec): min=199, max=12902, avg=6075.85, stdev=1687.00 00:11:03.316 lat (usec): min=305, max=12943, avg=6127.93, stdev=1699.11 00:11:03.316 clat percentiles (usec): 00:11:03.316 | 1.00th=[ 1991], 5.00th=[ 2737], 10.00th=[ 3326], 20.00th=[ 4490], 00:11:03.316 | 30.00th=[ 5604], 40.00th=[ 6259], 50.00th=[ 6587], 60.00th=[ 6849], 00:11:03.316 | 70.00th=[ 7111], 80.00th=[ 7373], 90.00th=[ 7701], 95.00th=[ 7963], 00:11:03.316 | 99.00th=[ 9503], 99.50th=[10290], 99.90th=[11994], 99.95th=[12387], 00:11:03.316 | 99.99th=[12780] 00:11:03.316 bw ( KiB/s): min= 8288, max=40878, per=87.19%, avg=25600.55, stdev=10477.99, samples=11 00:11:03.316 iops : min= 2072, max=10219, avg=6400.09, stdev=2619.43, samples=11 00:11:03.316 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:11:03.316 lat (msec) : 2=0.61%, 4=8.96%, 10=87.88%, 20=2.49%, 50=0.01% 00:11:03.316 cpu : usr=6.20%, sys=25.60%, ctx=7362, majf=0, minf=133 00:11:03.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:11:03.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.316 issued rwts: total=72059,38405,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.316 00:11:03.316 Run status group 0 (all jobs): 00:11:03.316 READ: bw=46.9MiB/s (49.2MB/s), 46.9MiB/s-46.9MiB/s (49.2MB/s-49.2MB/s), io=281MiB (295MB), run=6004-6004msec 00:11:03.316 WRITE: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s (30.1MB/s-30.1MB/s), io=150MiB (157MB), run=5232-5232msec 00:11:03.316 00:11:03.316 Disk stats (read/write): 00:11:03.316 nvme0n1: ios=71041/37893, merge=0/0, ticks=485629/212173, in_queue=697802, util=98.67% 00:11:03.316 15:56:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:03.316 15:56:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:03.316 15:56:57 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:11:03.316 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:03.316 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.316 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:03.316 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.316 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:11:03.316 15:56:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:03.884 rmmod nvme_tcp 00:11:03.884 rmmod nvme_fabrics 00:11:03.884 rmmod nvme_keyring 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75774 ']' 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75774 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75774 ']' 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75774 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75774 00:11:03.884 killing process with pid 75774 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75774' 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75774 00:11:03.884 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 75774 00:11:04.142 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.142 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:04.142 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:04.142 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.142 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:04.142 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.142 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.142 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.142 15:56:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:04.142 ************************************ 00:11:04.142 END TEST nvmf_target_multipath 00:11:04.142 ************************************ 00:11:04.142 00:11:04.142 real 0m20.680s 00:11:04.142 user 1m20.768s 00:11:04.142 sys 0m7.044s 00:11:04.143 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:04.143 15:56:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:04.143 15:56:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:04.143 15:56:57 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:04.143 15:56:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:04.143 15:56:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.143 15:56:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:04.143 ************************************ 00:11:04.143 START TEST nvmf_zcopy 00:11:04.143 ************************************ 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:04.143 * Looking for test storage... 
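Throughout the multipath run above, the repeated ana_state checks follow one pattern: poll /sys/block/nvme0cXn1/ana_state until it reports the expected ANA state, giving up after roughly 20 one-second retries. A condensed sketch of that helper, simplified from the multipath.sh trace above (not the verbatim implementation):

check_ana_state() {
    local path=$1 ana_state=$2 timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # keep polling until the sysfs file exists and holds the expected state
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1s
    done
}

# e.g. check_ana_state nvme0c0n1 optimized, as used before and after each
# nvmf_subsystem_listener_set_ana_state call in the run above.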
00:11:04.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.143 15:56:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.401 15:56:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.401 15:56:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:04.402 Cannot find device "nvmf_tgt_br" 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:04.402 Cannot find device "nvmf_tgt_br2" 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:04.402 Cannot find device "nvmf_tgt_br" 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:04.402 Cannot find device "nvmf_tgt_br2" 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:04.402 15:56:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:04.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:04.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:04.402 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:04.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:11:04.661 00:11:04.661 --- 10.0.0.2 ping statistics --- 00:11:04.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.661 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:04.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:04.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:11:04.661 00:11:04.661 --- 10.0.0.3 ping statistics --- 00:11:04.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.661 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:04.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:11:04.661 00:11:04.661 --- 10.0.0.1 ping statistics --- 00:11:04.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.661 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:04.661 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=76380 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 76380 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 76380 ']' 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.662 15:56:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.662 [2024-07-15 15:56:58.310908] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:11:04.662 [2024-07-15 15:56:58.311019] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.920 [2024-07-15 15:56:58.447908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.920 [2024-07-15 15:56:58.553865] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.920 [2024-07-15 15:56:58.553943] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
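The interface plumbing above comes from nvmf_veth_init in test/nvmf/common.sh: the initiator end of a veth pair stays on the host (10.0.0.1), the target ends of two more pairs are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), and the host-side peers are enslaved to the nvmf_br bridge with an iptables accept rule for port 4420. A minimal sketch that rebuilds the same topology by hand, assuming root plus iproute2/iptables and reusing the names from the log (the second target interface, nvmf_tgt_if2, is created the same way and omitted here):

  # Target namespace and the two veth pairs used by this test
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side peers so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target, as verified in the log above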
00:11:04.920 [2024-07-15 15:56:58.553981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.920 [2024-07-15 15:56:58.553993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.920 [2024-07-15 15:56:58.554002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.920 [2024-07-15 15:56:58.554029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 [2024-07-15 15:56:59.383377] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 [2024-07-15 15:56:59.399459] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 malloc0 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.856 
15:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:05.856 { 00:11:05.856 "params": { 00:11:05.856 "name": "Nvme$subsystem", 00:11:05.856 "trtype": "$TEST_TRANSPORT", 00:11:05.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:05.856 "adrfam": "ipv4", 00:11:05.856 "trsvcid": "$NVMF_PORT", 00:11:05.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:05.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:05.856 "hdgst": ${hdgst:-false}, 00:11:05.856 "ddgst": ${ddgst:-false} 00:11:05.856 }, 00:11:05.856 "method": "bdev_nvme_attach_controller" 00:11:05.856 } 00:11:05.856 EOF 00:11:05.856 )") 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:05.856 15:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:05.856 "params": { 00:11:05.856 "name": "Nvme1", 00:11:05.856 "trtype": "tcp", 00:11:05.856 "traddr": "10.0.0.2", 00:11:05.856 "adrfam": "ipv4", 00:11:05.856 "trsvcid": "4420", 00:11:05.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:05.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:05.856 "hdgst": false, 00:11:05.856 "ddgst": false 00:11:05.856 }, 00:11:05.856 "method": "bdev_nvme_attach_controller" 00:11:05.856 }' 00:11:05.856 [2024-07-15 15:56:59.506034] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:11:05.856 [2024-07-15 15:56:59.506132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76431 ] 00:11:06.115 [2024-07-15 15:56:59.650148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.115 [2024-07-15 15:56:59.783141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.373 Running I/O for 10 seconds... 
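Before this run started, the target was brought up and provisioned through rpc_cmd, which in this framework is essentially a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. A standalone sketch of the same sequence, with the flags copied from the trace above (a sketch only; it assumes hugepages are already configured and the namespace topology from earlier exists):

  # Start the target inside the test namespace, then wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten
  # Provision: zero-copy TCP transport, subsystem, listener, malloc bdev, namespace
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0   # 32 MB bdev, 4096-byte blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1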
00:11:16.340
00:11:16.340                                                                                      Latency(us)
00:11:16.340      Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:11:16.340 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:11:16.340      Verification LBA range: start 0x0 length 0x1000
00:11:16.340      Nvme1n1              :      10.02    5383.31      42.06       0.00      0.00    23706.45    2889.54   32410.53
00:11:16.340 ===================================================================================================================
00:11:16.340      Total                :               5383.31      42.06       0.00      0.00    23706.45    2889.54   32410.53
00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76550
00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:11:16.597 {
00:11:16.597   "params": {
00:11:16.597     "name": "Nvme$subsystem",
00:11:16.597     "trtype": "$TEST_TRANSPORT",
00:11:16.597     "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:16.597     "adrfam": "ipv4",
00:11:16.597     "trsvcid": "$NVMF_PORT",
00:11:16.597     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:16.597     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:16.597     "hdgst": ${hdgst:-false},
00:11:16.597     "ddgst": ${ddgst:-false}
00:11:16.597   },
00:11:16.597   "method": "bdev_nvme_attach_controller"
00:11:16.597 }
00:11:16.597 EOF
00:11:16.597 )")
00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:11:16.597 [2024-07-15 15:57:10.230269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:16.597 [2024-07-15 15:57:10.230316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
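The --json /dev/fd/63 argument above is a process substitution feeding bdevperf the output of gen_nvmf_target_json, i.e. an SPDK JSON config that attaches the NVMe-oF controller as an initiator-side bdev before the workload starts. A rough standalone equivalent of this second run, with the parameter values taken from the generated config echoed in the trace (the file name is arbitrary, and the exact wrapper emitted by gen_nvmf_target_json may differ in detail from this sketch):

  # Hypothetical config file mirroring the generated Nvme1 attach parameters
  cat > /tmp/nvme1_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # Same workload as the second run in the log: 5 s, QD 128, 50/50 randrw, 8 KiB IOs
  ./build/examples/bdevperf --json /tmp/nvme1_bdev.json -t 5 -q 128 -w randrw -M 50 -o 8192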
00:11:16.597 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:16.597 15:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:16.597 "params": { 00:11:16.597 "name": "Nvme1", 00:11:16.597 "trtype": "tcp", 00:11:16.597 "traddr": "10.0.0.2", 00:11:16.597 "adrfam": "ipv4", 00:11:16.597 "trsvcid": "4420", 00:11:16.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:16.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:16.597 "hdgst": false, 00:11:16.597 "ddgst": false 00:11:16.597 }, 00:11:16.597 "method": "bdev_nvme_attach_controller" 00:11:16.597 }' 00:11:16.597 [2024-07-15 15:57:10.242229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.597 [2024-07-15 15:57:10.242261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.597 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.597 [2024-07-15 15:57:10.254224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.597 [2024-07-15 15:57:10.254263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.597 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.597 [2024-07-15 15:57:10.266225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.597 [2024-07-15 15:57:10.266255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.597 [2024-07-15 15:57:10.266850] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:11:16.597 [2024-07-15 15:57:10.266920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76550 ] 00:11:16.597 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.597 [2024-07-15 15:57:10.278226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.597 [2024-07-15 15:57:10.278255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.598 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.598 [2024-07-15 15:57:10.290235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.598 [2024-07-15 15:57:10.290263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.598 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.598 [2024-07-15 15:57:10.302236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.598 [2024-07-15 15:57:10.302296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.598 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.598 [2024-07-15 15:57:10.314239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.598 [2024-07-15 15:57:10.314299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.598 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.326243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.856 [2024-07-15 15:57:10.326271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.856 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.338249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.856 [2024-07-15 15:57:10.338278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:16.856 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.350256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.856 [2024-07-15 15:57:10.350286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.856 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.362260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.856 [2024-07-15 15:57:10.362290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.856 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.374283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.856 [2024-07-15 15:57:10.374312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.856 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.386272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.856 [2024-07-15 15:57:10.386305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.856 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.398280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.856 [2024-07-15 15:57:10.398315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.856 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.403655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.856 [2024-07-15 15:57:10.410310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.856 [2024-07-15 15:57:10.410348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.856 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.422295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.856 [2024-07-15 15:57:10.422332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.856 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.434286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.856 [2024-07-15 15:57:10.434317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.856 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.446293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.856 [2024-07-15 15:57:10.446323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.856 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.856 [2024-07-15 15:57:10.458324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.857 [2024-07-15 15:57:10.458362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.857 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.857 [2024-07-15 15:57:10.470296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.857 [2024-07-15 15:57:10.470329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.857 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.857 [2024-07-15 15:57:10.482335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.857 [2024-07-15 15:57:10.482377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.857 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.857 [2024-07-15 15:57:10.494312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.857 [2024-07-15 15:57:10.494347] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.857 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.857 [2024-07-15 15:57:10.506310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.857 [2024-07-15 15:57:10.506349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.857 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.857 [2024-07-15 15:57:10.518308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.857 [2024-07-15 15:57:10.518340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.857 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.857 [2024-07-15 15:57:10.530315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.857 [2024-07-15 15:57:10.530347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.857 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.857 [2024-07-15 15:57:10.535642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.857 [2024-07-15 15:57:10.542310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.857 [2024-07-15 15:57:10.542345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.857 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.857 [2024-07-15 15:57:10.554354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.857 [2024-07-15 15:57:10.554396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.857 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.857 [2024-07-15 15:57:10.566350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.857 [2024-07-15 15:57:10.566390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.857 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.857 [2024-07-15 15:57:10.578355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.857 [2024-07-15 15:57:10.578396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.857 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.590363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.590402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.602367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.602405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.614374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.614415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.626364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.626403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.638372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.638408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.650368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.650404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.662372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.662405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.674398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.674433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.686413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.686463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.698408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.698456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.710526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.710563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 Running I/O for 5 seconds... 
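From this point the trace is dominated by repeating pairs of target-side errors (subsystem.c / nvmf_rpc.c) followed by client-side JSON-RPC failures: while bdevperf keeps the zero-copy I/O running, the script keeps issuing nvmf_subsystem_add_ns for NSID 1 on cnode1, which is already occupied by malloc0, so the target rejects every attempt with Code=-32602 (Invalid parameters), presumably to exercise the RPC path under load. One such call, reproduced by hand, would look like this and fail the same way:

  # NSID 1 already belongs to malloc0 (added during provisioning above), so this
  # is expected to be rejected with "Invalid parameters" (-32602), matching the log.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
      || echo "add_ns rejected as expected"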
00:11:17.116 [2024-07-15 15:57:10.722415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.722443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.743152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.743217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.754785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.754823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.770582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.770621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.787193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.787229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.804373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.804432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.821042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.821106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.116 [2024-07-15 15:57:10.838053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.116 [2024-07-15 15:57:10.838091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.116 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:10.854903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:10.854935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:10.870231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:10.870262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:10.887377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:10.887409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:10.903598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:10.903646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:10.919243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:10.919274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:10.936117] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:10.936149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:10.953226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:10.953276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:10.969704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:10.969755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:10.986488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:10.986542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:11.003253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:11.003320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:11.019786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:11.019855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:11.030410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:11.030462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:11.046172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:11.046208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:11.057473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:11.057523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:11.073195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:11.073250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:11.084722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:11.084789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:11.101471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:11.101521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.399 [2024-07-15 15:57:11.119448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.399 [2024-07-15 15:57:11.119498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.399 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.658 [2024-07-15 15:57:11.135585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:17.658 [2024-07-15 15:57:11.135636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.658 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.658 [2024-07-15 15:57:11.151077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.658 [2024-07-15 15:57:11.151141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.658 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.658 [2024-07-15 15:57:11.170515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.658 [2024-07-15 15:57:11.170564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.187234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.659 [2024-07-15 15:57:11.187287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.201896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.659 [2024-07-15 15:57:11.201932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.218815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.659 [2024-07-15 15:57:11.218867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.236717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.659 [2024-07-15 15:57:11.236785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.253565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.659 [2024-07-15 15:57:11.253606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.269499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.659 [2024-07-15 15:57:11.269542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.287091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.659 [2024-07-15 15:57:11.287130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.304608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.659 [2024-07-15 15:57:11.304665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.321116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.659 [2024-07-15 15:57:11.321155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.338494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.659 [2024-07-15 15:57:11.338532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.354835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:17.659 [2024-07-15 15:57:11.354876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.659 [2024-07-15 15:57:11.372805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.659 [2024-07-15 15:57:11.372845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.659 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.389195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.389236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.405772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.405829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.421854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.421924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.433364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.433420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.448803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.448846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.464466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.464509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.480483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.480526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.499815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.499858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.515054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.515107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.531772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.531814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.548020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.548077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.564422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.564462] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.575325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.575365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.590952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.591007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.607456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.607498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.624715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.624760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:17.918 [2024-07-15 15:57:11.642344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.918 [2024-07-15 15:57:11.642386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.918 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.177 [2024-07-15 15:57:11.658081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.177 [2024-07-15 15:57:11.658130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.177 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.177 [2024-07-15 15:57:11.676041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.177 [2024-07-15 15:57:11.676108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.177 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.177 [2024-07-15 15:57:11.693130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.177 [2024-07-15 15:57:11.693188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.177 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.177 [2024-07-15 15:57:11.709016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.709119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.178 [2024-07-15 15:57:11.720705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.720762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.178 [2024-07-15 15:57:11.735168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.735220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.178 [2024-07-15 15:57:11.746525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.746566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.178 [2024-07-15 15:57:11.762795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.762836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.178 [2024-07-15 15:57:11.779474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.779515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.178 [2024-07-15 15:57:11.794920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.794989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.178 [2024-07-15 15:57:11.812004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.812045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.178 [2024-07-15 15:57:11.829161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.829202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.178 [2024-07-15 15:57:11.844882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.844924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.178 [2024-07-15 15:57:11.863566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.863606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:18.178 [2024-07-15 15:57:11.880063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.880103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.178 [2024-07-15 15:57:11.895979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.178 [2024-07-15 15:57:11.896030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.178 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:11.913382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:11.913424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:11.929562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:11.929616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:11.947313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:11.947370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:11.964337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:11.964373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:11.980601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:11.980637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:11 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:11.997745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:11.997781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:12.013252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:12.013299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:12.029609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:12.029802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:12.046780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:12.046946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:12.063685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:12.063863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:12.081332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:12.081375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:12.098213] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:12.098300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:12.115937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:12.116011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:12.131346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:12.131388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.437 [2024-07-15 15:57:12.147891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.437 [2024-07-15 15:57:12.147951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.437 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.708 [2024-07-15 15:57:12.165209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.708 [2024-07-15 15:57:12.165286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.708 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.708 [2024-07-15 15:57:12.181023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.708 [2024-07-15 15:57:12.181110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.708 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.708 [2024-07-15 15:57:12.191895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.708 [2024-07-15 15:57:12.191938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.708 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.708 [2024-07-15 15:57:12.208177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.708 [2024-07-15 15:57:12.208238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.708 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.708 [2024-07-15 15:57:12.225054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.708 [2024-07-15 15:57:12.225122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.708 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.708 [2024-07-15 15:57:12.241607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.708 [2024-07-15 15:57:12.241664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.708 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.708 [2024-07-15 15:57:12.259115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.708 [2024-07-15 15:57:12.259155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.708 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.708 [2024-07-15 15:57:12.275745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.708 [2024-07-15 15:57:12.275788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.709 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.709 [2024-07-15 15:57:12.293425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.709 [2024-07-15 15:57:12.293467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.709 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.709 [2024-07-15 15:57:12.309931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:18.709 [2024-07-15 15:57:12.309986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.709 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.709 [2024-07-15 15:57:12.325224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.709 [2024-07-15 15:57:12.325297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.709 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.709 [2024-07-15 15:57:12.342217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.709 [2024-07-15 15:57:12.342299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.709 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.709 [2024-07-15 15:57:12.358091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.709 [2024-07-15 15:57:12.358132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.709 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.709 [2024-07-15 15:57:12.369369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.709 [2024-07-15 15:57:12.369410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.709 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.709 [2024-07-15 15:57:12.385543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.709 [2024-07-15 15:57:12.385584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.709 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.709 [2024-07-15 15:57:12.402638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.709 [2024-07-15 15:57:12.402677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.709 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.709 [2024-07-15 15:57:12.419648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.709 [2024-07-15 15:57:12.419690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.709 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.997 [2024-07-15 15:57:12.435788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.997 [2024-07-15 15:57:12.435834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.997 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.997 [2024-07-15 15:57:12.452840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.997 [2024-07-15 15:57:12.452884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.997 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.997 [2024-07-15 15:57:12.469583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.997 [2024-07-15 15:57:12.469623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.997 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.997 [2024-07-15 15:57:12.486528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.997 [2024-07-15 15:57:12.486568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.997 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.997 [2024-07-15 15:57:12.502671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.997 [2024-07-15 15:57:12.502712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.997 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.997 [2024-07-15 15:57:12.519789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:18.997 [2024-07-15 15:57:12.519831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.997 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.997 [2024-07-15 15:57:12.536814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.536859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.998 [2024-07-15 15:57:12.553267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.553308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.998 [2024-07-15 15:57:12.571157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.571197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.998 [2024-07-15 15:57:12.587618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.587701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.998 [2024-07-15 15:57:12.603459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.603499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.998 [2024-07-15 15:57:12.620251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.620294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.998 [2024-07-15 15:57:12.637448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.637506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.998 [2024-07-15 15:57:12.653150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.653192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.998 [2024-07-15 15:57:12.669379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.669422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.998 [2024-07-15 15:57:12.686102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.686155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.998 [2024-07-15 15:57:12.703735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.703778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.998 [2024-07-15 15:57:12.720129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.998 [2024-07-15 15:57:12.720169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.998 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.737943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.737996] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.748579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.748618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.765389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.765434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.780947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.781022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.791756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.791798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.808418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.808460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.823879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.823921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.834979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.835020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.850407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.850451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.866576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.866643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.884615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.884672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.898642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.898690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.914987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.915061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.931963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.932020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.949568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.949625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.964000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.964041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.257 [2024-07-15 15:57:12.980107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.257 [2024-07-15 15:57:12.980147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.257 2024/07/15 15:57:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.516 [2024-07-15 15:57:12.997862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.516 [2024-07-15 15:57:12.997918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.516 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.516 [2024-07-15 15:57:13.014161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.516 [2024-07-15 15:57:13.014205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.516 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.516 [2024-07-15 15:57:13.031797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.516 [2024-07-15 15:57:13.031843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.516 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:19.516 [2024-07-15 15:57:13.047249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.516 [2024-07-15 15:57:13.047305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.063430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.063475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.080006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.080067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.091673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.091712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.106119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.106156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.123255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.123289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.139147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.139183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.151176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.151213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.162872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.162911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.179351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.179390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.195689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.195727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.213289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.213348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.517 [2024-07-15 15:57:13.229931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.517 [2024-07-15 15:57:13.229985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.517 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.246999] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.247051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.775 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.263151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.263210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.775 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.279312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.279370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.775 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.291398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.291468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.775 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.304371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.304444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.775 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.320986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.321071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.775 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.337657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.337700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.775 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.353130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.353188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.775 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.369819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.369871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.775 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.385777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.385822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.775 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.397106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.397161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.775 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.775 [2024-07-15 15:57:13.411660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.775 [2024-07-15 15:57:13.411701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.776 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.776 [2024-07-15 15:57:13.427993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.776 [2024-07-15 15:57:13.428061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.776 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.776 [2024-07-15 15:57:13.445778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:19.776 [2024-07-15 15:57:13.445824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.776 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.776 [2024-07-15 15:57:13.462768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.776 [2024-07-15 15:57:13.462812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.776 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.776 [2024-07-15 15:57:13.479844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.776 [2024-07-15 15:57:13.479891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.776 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.776 [2024-07-15 15:57:13.497220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.776 [2024-07-15 15:57:13.497309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.776 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.512557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.512634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.523888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.523927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.539663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.539710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.556458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.556503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.573624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.573671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.587902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.587969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.604508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.604568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.620939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.621010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.638074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.638118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.654474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:20.034 [2024-07-15 15:57:13.654514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.670039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.670088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.685515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.685590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.695548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.695631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.711512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.711577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.727565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.727622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.743321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.743379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.034 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.034 [2024-07-15 15:57:13.760607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.034 [2024-07-15 15:57:13.760698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.293 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.293 [2024-07-15 15:57:13.776717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.293 [2024-07-15 15:57:13.776760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.293 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.293 [2024-07-15 15:57:13.794407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.293 [2024-07-15 15:57:13.794467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.293 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.293 [2024-07-15 15:57:13.810723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.293 [2024-07-15 15:57:13.810782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.293 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.293 [2024-07-15 15:57:13.827278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.293 [2024-07-15 15:57:13.827354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.293 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.293 [2024-07-15 15:57:13.844363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.293 [2024-07-15 15:57:13.844425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.293 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.293 [2024-07-15 15:57:13.862506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.293 [2024-07-15 15:57:13.862564] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.293 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.293 [2024-07-15 15:57:13.878515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.293 [2024-07-15 15:57:13.878573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.293 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.293 [2024-07-15 15:57:13.895037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.293 [2024-07-15 15:57:13.895077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.293 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.293 [2024-07-15 15:57:13.912729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.294 [2024-07-15 15:57:13.912775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.294 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.294 [2024-07-15 15:57:13.928507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.294 [2024-07-15 15:57:13.928560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.294 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.294 [2024-07-15 15:57:13.947615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.294 [2024-07-15 15:57:13.947667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.294 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.294 [2024-07-15 15:57:13.963250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.294 [2024-07-15 15:57:13.963319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.294 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.294 [2024-07-15 15:57:13.979148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.294 [2024-07-15 15:57:13.979201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.294 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.294 [2024-07-15 15:57:13.995038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.294 [2024-07-15 15:57:13.995094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.294 2024/07/15 15:57:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.294 [2024-07-15 15:57:14.011192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.294 [2024-07-15 15:57:14.011264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.294 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.553 [2024-07-15 15:57:14.026737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.553 [2024-07-15 15:57:14.026789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.553 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.553 [2024-07-15 15:57:14.043221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.553 [2024-07-15 15:57:14.043296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.553 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.553 [2024-07-15 15:57:14.060353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.553 [2024-07-15 15:57:14.060413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.553 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.553 [2024-07-15 15:57:14.070771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.553 [2024-07-15 15:57:14.070812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:20.553 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.553 [2024-07-15 15:57:14.086551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.553 [2024-07-15 15:57:14.086624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.553 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.554 [2024-07-15 15:57:14.101228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.101287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.554 [2024-07-15 15:57:14.118056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.118098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.554 [2024-07-15 15:57:14.135001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.135087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.554 [2024-07-15 15:57:14.152324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.152384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.554 [2024-07-15 15:57:14.163385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.163442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:20.554 [2024-07-15 15:57:14.178736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.178779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.554 [2024-07-15 15:57:14.194715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.194761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.554 [2024-07-15 15:57:14.205723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.205768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.554 [2024-07-15 15:57:14.221223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.221271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.554 [2024-07-15 15:57:14.237370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.237416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.554 [2024-07-15 15:57:14.253635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.253682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.554 [2024-07-15 15:57:14.269865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.554 [2024-07-15 15:57:14.269914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.554 2024/07/15 15:57:14 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.812 [2024-07-15 15:57:14.287299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.812 [2024-07-15 15:57:14.287340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.812 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.812 [2024-07-15 15:57:14.304084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.812 [2024-07-15 15:57:14.304148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.812 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.812 [2024-07-15 15:57:14.320754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.812 [2024-07-15 15:57:14.320794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.812 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.812 [2024-07-15 15:57:14.336019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.812 [2024-07-15 15:57:14.336055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.812 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.812 [2024-07-15 15:57:14.352243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.812 [2024-07-15 15:57:14.352283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.812 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.812 [2024-07-15 15:57:14.368231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.812 [2024-07-15 15:57:14.368282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.812 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.812 [2024-07-15 15:57:14.378513] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.812 [2024-07-15 15:57:14.378557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.812 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.812 [2024-07-15 15:57:14.391706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.812 [2024-07-15 15:57:14.391744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.813 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.813 [2024-07-15 15:57:14.403317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.813 [2024-07-15 15:57:14.403357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.813 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.813 [2024-07-15 15:57:14.419573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.813 [2024-07-15 15:57:14.419616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.813 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.813 [2024-07-15 15:57:14.436768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.813 [2024-07-15 15:57:14.436816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.813 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.813 [2024-07-15 15:57:14.452828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.813 [2024-07-15 15:57:14.452870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.813 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.813 [2024-07-15 15:57:14.470550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.813 [2024-07-15 15:57:14.470592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.813 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.813 [2024-07-15 15:57:14.487153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.813 [2024-07-15 15:57:14.487197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.813 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.813 [2024-07-15 15:57:14.503267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.813 [2024-07-15 15:57:14.503331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.813 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.813 [2024-07-15 15:57:14.519709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.813 [2024-07-15 15:57:14.519755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.813 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.813 [2024-07-15 15:57:14.531666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.813 [2024-07-15 15:57:14.531710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.813 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.548286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.548338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.562641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.562688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.579663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.579732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.597261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.597341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.614605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.614664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.632209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.632260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.648417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.648474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.665309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.665370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.681987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.682029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.698094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.698135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.714679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.714722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.732447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.732508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.748295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.748354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.758869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.758911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.774458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.072 [2024-07-15 15:57:14.774500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.072 [2024-07-15 15:57:14.786766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
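The flood of Code=-32602 errors through this stretch is zcopy.sh repeatedly re-issuing nvmf_subsystem_add_ns for NSID 1 while a 5-second randrw job runs against Nvme1n1 (see the Latency summary further down); every attempt is rejected with "Requested NSID 1 already in use", and the test tolerates these failures — it only checks that the target keeps serving I/O while the RPCs hammer it. A minimal sketch of the call being retried, assuming the standard rpc.py form (the test itself goes through rpc_cmd and the Go JSON-RPC client, which is what prints the map[...] params and the %!s(bool=false) formatting artifact above):

  # Hypothetical manual reproduction of the retried RPC (rpc.py path and flags assumed):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
      nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # On the wire this is roughly:
  #   {"method": "nvmf_subsystem_add_ns",
  #    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
  #               "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
  # The target rejects it with JSON-RPC error -32602 (Invalid parameters)
  # because NSID 1 is already attached to cnode1.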
00:11:21.072 [2024-07-15 15:57:14.786810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.072 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.330 [2024-07-15 15:57:14.803006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.330 [2024-07-15 15:57:14.803091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.330 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.330 [2024-07-15 15:57:14.820246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.330 [2024-07-15 15:57:14.820308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.330 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.330 [2024-07-15 15:57:14.836801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.330 [2024-07-15 15:57:14.836846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.330 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.330 [2024-07-15 15:57:14.854075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.330 [2024-07-15 15:57:14.854118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:14.865612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:14.865683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:14.882255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:14.882323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:14.900058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:14.900114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:14.916904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:14.916946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:14.933171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:14.933230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:14.949561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:14.949634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:14.967137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:14.967216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:14.984128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:14.984202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:15.000467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:15.000540] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:15.018247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:15.018299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:15.029285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:15.029328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.331 [2024-07-15 15:57:15.043776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.331 [2024-07-15 15:57:15.043820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.331 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.060474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.060536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.077005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.077076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.093223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.093281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.108755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.108799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.125700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.125743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.141668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.141710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.159903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.159968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.176463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.176535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.193922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.193993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.208242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.208286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.224757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.224816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.241337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.241397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.260668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.260711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.277437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.277496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.294046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.294088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.590 [2024-07-15 15:57:15.310265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.590 [2024-07-15 15:57:15.310334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.590 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:21.849 [2024-07-15 15:57:15.327165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.327221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.343210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.343266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.353758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.353797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.369193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.369233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.386016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.386052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.397278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.397314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.411969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.412019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.422399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.422437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.437149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.437216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.453521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.453565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.470578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.470622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.486621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.486662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.503718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.503761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.518337] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.518394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.534776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.849 [2024-07-15 15:57:15.534818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.849 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.849 [2024-07-15 15:57:15.552088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.850 [2024-07-15 15:57:15.552168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.850 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.850 [2024-07-15 15:57:15.570241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.850 [2024-07-15 15:57:15.570282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.850 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.585707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.585746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.602224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.602265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.612394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.612441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.628227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.628268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.643911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.643953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.663522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.663566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.680585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.680627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.695005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.695046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.711337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.711382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.726268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.726309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 00:11:22.108 Latency(us) 00:11:22.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.108 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:22.108 Nvme1n1 : 5.01 10508.63 82.10 0.00 0.00 12164.65 4974.78 23235.49 00:11:22.108 =================================================================================================================== 00:11:22.108 Total : 10508.63 82.10 0.00 0.00 12164.65 4974.78 23235.49 00:11:22.108 [2024-07-15 15:57:15.736746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.736789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.748746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.748779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.108 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.108 [2024-07-15 15:57:15.760768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.108 [2024-07-15 15:57:15.760822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.109 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.109 [2024-07-15 15:57:15.772780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.109 [2024-07-15 15:57:15.772824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.109 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.109 [2024-07-15 15:57:15.784779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.109 [2024-07-15 15:57:15.784823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.109 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.109 [2024-07-15 15:57:15.796786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.109 [2024-07-15 15:57:15.796833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.109 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.109 [2024-07-15 15:57:15.808805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.109 [2024-07-15 15:57:15.808849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.109 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.109 [2024-07-15 15:57:15.820801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.109 [2024-07-15 15:57:15.820846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.109 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.109 [2024-07-15 15:57:15.832803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.109 [2024-07-15 15:57:15.832848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.367 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.367 [2024-07-15 15:57:15.844808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.367 [2024-07-15 15:57:15.844854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.367 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.367 [2024-07-15 15:57:15.856799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.367 [2024-07-15 15:57:15.856844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.367 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.367 [2024-07-15 15:57:15.868808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.367 [2024-07-15 15:57:15.868848] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.367 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.367 [2024-07-15 15:57:15.880792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.367 [2024-07-15 15:57:15.880825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.367 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.367 [2024-07-15 15:57:15.892789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.367 [2024-07-15 15:57:15.892834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.367 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.367 [2024-07-15 15:57:15.904814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.367 [2024-07-15 15:57:15.904858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.368 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.368 [2024-07-15 15:57:15.916796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.368 [2024-07-15 15:57:15.916832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.368 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.368 [2024-07-15 15:57:15.928794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.368 [2024-07-15 15:57:15.928827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.368 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.368 [2024-07-15 15:57:15.940830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.368 [2024-07-15 15:57:15.940875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.368 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.368 [2024-07-15 15:57:15.952816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.368 [2024-07-15 15:57:15.952854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.368 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.368 [2024-07-15 15:57:15.964800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.368 [2024-07-15 15:57:15.964836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.368 2024/07/15 15:57:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:22.368 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76550) - No such process 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76550 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:22.368 delay0 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.368 15:57:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:22.627 [2024-07-15 15:57:16.166897] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:29.232 Initializing NVMe Controllers 00:11:29.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:29.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:29.232 Initialization complete. Launching workers. 
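Context for the abort pass launched just above: at target/zcopy.sh@52–@56 the test removes NSID 1, wraps malloc0 in a delay bdev (delay0) with 1,000,000 us latency parameters, re-exports delay0 as NSID 1, and then drives build/examples/abort at queue depth 64 for 5 seconds, so that outstanding I/Os are slow enough for abort commands to be exercised. A hedged sketch of the same sequence in plain rpc.py form (the log itself uses rpc_cmd; the -r/-t/-w/-n values are copied from the call above and appear to be bdev_delay_create's average/p99 read/write latency knobs — an assumption, not confirmed by the log):

  # Assumed rpc.py equivalents of the rpc_cmd calls at zcopy.sh@52..54:
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Then the abort example is pointed at the TCP listener (command copied from the log):
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'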
00:11:29.232 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 105 00:11:29.232 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 392, failed to submit 33 00:11:29.232 success 226, unsuccess 166, failed 0 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.232 rmmod nvme_tcp 00:11:29.232 rmmod nvme_fabrics 00:11:29.232 rmmod nvme_keyring 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 76380 ']' 00:11:29.232 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 76380 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 76380 ']' 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 76380 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76380 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:29.233 killing process with pid 76380 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76380' 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 76380 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 76380 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:29.233 00:11:29.233 real 0m24.873s 00:11:29.233 user 0m39.783s 00:11:29.233 sys 0m7.038s 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.233 ************************************ 00:11:29.233 END TEST nvmf_zcopy 00:11:29.233 15:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.233 ************************************ 00:11:29.233 15:57:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:29.233 15:57:22 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:29.233 15:57:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.233 15:57:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.233 15:57:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.233 ************************************ 00:11:29.233 START TEST nvmf_nmic 00:11:29.233 ************************************ 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:29.233 * Looking for test storage... 00:11:29.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:29.233 Cannot find device "nvmf_tgt_br" 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.233 Cannot find device "nvmf_tgt_br2" 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:29.233 Cannot find device "nvmf_tgt_br" 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:29.233 Cannot find device "nvmf_tgt_br2" 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:29.233 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:29.234 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:29.492 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:29.492 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:29.492 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:29.492 15:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:29.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:29.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:11:29.492 00:11:29.492 --- 10.0.0.2 ping statistics --- 00:11:29.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.492 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:29.492 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:29.492 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:11:29.492 00:11:29.492 --- 10.0.0.3 ping statistics --- 00:11:29.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.492 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:29.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:29.492 00:11:29.492 --- 10.0.0.1 ping statistics --- 00:11:29.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.492 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76876 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76876 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76876 ']' 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.492 15:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.749 [2024-07-15 15:57:23.222553] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:11:29.749 [2024-07-15 15:57:23.222643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.749 [2024-07-15 15:57:23.362883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.007 [2024-07-15 15:57:23.484680] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.007 [2024-07-15 15:57:23.484770] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.007 [2024-07-15 15:57:23.484783] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.007 [2024-07-15 15:57:23.484794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.007 [2024-07-15 15:57:23.484803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.007 [2024-07-15 15:57:23.484940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.007 [2024-07-15 15:57:23.485759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.007 [2024-07-15 15:57:23.485994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.007 [2024-07-15 15:57:23.486071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.573 [2024-07-15 15:57:24.255800] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.573 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.831 Malloc0 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.831 [2024-07-15 15:57:24.331846] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.831 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.831 test case1: single bdev can't be used in multiple subsystems 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.832 [2024-07-15 15:57:24.355695] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:30.832 [2024-07-15 15:57:24.355730] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:30.832 [2024-07-15 15:57:24.355741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.832 2024/07/15 15:57:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.832 request: 00:11:30.832 { 00:11:30.832 "method": "nvmf_subsystem_add_ns", 00:11:30.832 "params": { 00:11:30.832 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:30.832 "namespace": { 00:11:30.832 "bdev_name": "Malloc0", 00:11:30.832 "no_auto_visible": false 00:11:30.832 } 00:11:30.832 } 00:11:30.832 } 00:11:30.832 Got JSON-RPC error response 00:11:30.832 GoRPCClient: error on JSON-RPC call 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@29 -- # nmic_status=1 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:30.832 Adding namespace failed - expected result. 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:30.832 test case2: host connect to nvmf target in multiple paths 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.832 [2024-07-15 15:57:24.367824] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.832 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:31.090 15:57:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:31.090 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:31.090 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.090 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:31.090 15:57:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:32.994 15:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:32.994 15:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:32.994 15:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.252 15:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:33.252 15:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.252 15:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:33.252 15:57:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:33.252 [global] 00:11:33.252 thread=1 00:11:33.252 invalidate=1 00:11:33.252 rw=write 00:11:33.252 time_based=1 00:11:33.252 runtime=1 00:11:33.252 ioengine=libaio 00:11:33.252 direct=1 00:11:33.252 bs=4096 00:11:33.252 iodepth=1 00:11:33.252 norandommap=0 00:11:33.252 numjobs=1 00:11:33.252 00:11:33.252 verify_dump=1 00:11:33.252 verify_backlog=512 00:11:33.252 verify_state_save=0 00:11:33.252 do_verify=1 00:11:33.252 verify=crc32c-intel 00:11:33.252 [job0] 00:11:33.252 filename=/dev/nvme0n1 00:11:33.252 Could not set queue depth (nvme0n1) 00:11:33.252 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.252 fio-3.35 00:11:33.252 
Starting 1 thread 00:11:34.627 00:11:34.627 job0: (groupid=0, jobs=1): err= 0: pid=76991: Mon Jul 15 15:57:28 2024 00:11:34.627 read: IOPS=3304, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1001msec) 00:11:34.627 slat (nsec): min=12902, max=49665, avg=16475.50, stdev=4114.50 00:11:34.627 clat (usec): min=125, max=287, avg=144.06, stdev=11.29 00:11:34.627 lat (usec): min=138, max=304, avg=160.54, stdev=12.42 00:11:34.627 clat percentiles (usec): 00:11:34.627 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:11:34.627 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:11:34.628 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:11:34.628 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 206], 99.95th=[ 262], 00:11:34.628 | 99.99th=[ 289] 00:11:34.628 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:34.628 slat (usec): min=18, max=108, avg=24.89, stdev= 6.62 00:11:34.628 clat (usec): min=86, max=198, avg=102.23, stdev= 8.75 00:11:34.628 lat (usec): min=107, max=306, avg=127.12, stdev=12.23 00:11:34.628 clat percentiles (usec): 00:11:34.628 | 1.00th=[ 91], 5.00th=[ 93], 10.00th=[ 94], 20.00th=[ 96], 00:11:34.628 | 30.00th=[ 98], 40.00th=[ 99], 50.00th=[ 100], 60.00th=[ 102], 00:11:34.628 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 113], 95.00th=[ 119], 00:11:34.628 | 99.00th=[ 137], 99.50th=[ 143], 99.90th=[ 155], 99.95th=[ 161], 00:11:34.628 | 99.99th=[ 198] 00:11:34.628 bw ( KiB/s): min=15552, max=15552, per=100.00%, avg=15552.00, stdev= 0.00, samples=1 00:11:34.628 iops : min= 3888, max= 3888, avg=3888.00, stdev= 0.00, samples=1 00:11:34.628 lat (usec) : 100=25.13%, 250=74.84%, 500=0.03% 00:11:34.628 cpu : usr=3.80%, sys=9.70%, ctx=6892, majf=0, minf=2 00:11:34.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.628 issued rwts: total=3308,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.628 00:11:34.628 Run status group 0 (all jobs): 00:11:34.628 READ: bw=12.9MiB/s (13.5MB/s), 12.9MiB/s-12.9MiB/s (13.5MB/s-13.5MB/s), io=12.9MiB (13.5MB), run=1001-1001msec 00:11:34.628 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:11:34.628 00:11:34.628 Disk stats (read/write): 00:11:34.628 nvme0n1: ios=3116/3072, merge=0/0, ticks=497/354, in_queue=851, util=91.48% 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:34.628 rmmod nvme_tcp 00:11:34.628 rmmod nvme_fabrics 00:11:34.628 rmmod nvme_keyring 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76876 ']' 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76876 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76876 ']' 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76876 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76876 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:34.628 killing process with pid 76876 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76876' 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76876 00:11:34.628 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76876 00:11:34.886 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.886 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:34.886 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:34.886 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.886 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:34.886 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.886 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.886 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.886 15:57:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:34.886 00:11:34.886 real 0m5.826s 00:11:34.886 user 0m19.447s 00:11:34.886 sys 0m1.467s 00:11:34.886 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.886 15:57:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 ************************************ 00:11:34.886 END TEST nvmf_nmic 00:11:34.886 ************************************ 00:11:34.886 15:57:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:34.886 15:57:28 nvmf_tcp -- 
nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:34.886 15:57:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:34.886 15:57:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.886 15:57:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 ************************************ 00:11:34.886 START TEST nvmf_fio_target 00:11:34.886 ************************************ 00:11:34.886 15:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:35.145 * Looking for test storage... 00:11:35.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:35.145 Cannot find device "nvmf_tgt_br" 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:35.145 Cannot find device "nvmf_tgt_br2" 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:11:35.145 Cannot find device "nvmf_tgt_br" 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:35.145 Cannot find device "nvmf_tgt_br2" 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:35.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:35.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:35.145 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:35.146 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:35.146 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:35.146 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:35.146 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:35.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:11:35.405 00:11:35.405 --- 10.0.0.2 ping statistics --- 00:11:35.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.405 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:35.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:35.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:11:35.405 00:11:35.405 --- 10.0.0.3 ping statistics --- 00:11:35.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.405 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:35.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:11:35.405 00:11:35.405 --- 10.0.0.1 ping statistics --- 00:11:35.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.405 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.405 15:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=77171 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 77171 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 77171 ']' 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.405 15:57:29 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.405 15:57:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.405 [2024-07-15 15:57:29.065693] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:11:35.405 [2024-07-15 15:57:29.065799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.664 [2024-07-15 15:57:29.206577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.664 [2024-07-15 15:57:29.334462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.664 [2024-07-15 15:57:29.334531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.664 [2024-07-15 15:57:29.334546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.664 [2024-07-15 15:57:29.334556] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.664 [2024-07-15 15:57:29.334566] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.664 [2024-07-15 15:57:29.334738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.664 [2024-07-15 15:57:29.335569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.664 [2024-07-15 15:57:29.335688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.664 [2024-07-15 15:57:29.335697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.599 15:57:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.599 15:57:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:36.599 15:57:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.599 15:57:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:36.599 15:57:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.599 15:57:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.599 15:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:36.857 [2024-07-15 15:57:30.404717] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.858 15:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:37.116 15:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:37.116 15:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:37.374 15:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 
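For reference, the bdev layout that target/fio.sh assembles in the trace around this point can be reproduced with a standalone rpc.py sequence like the sketch below. It is pieced together from the commands visible in this log (64 MiB malloc bdevs with 512-byte blocks, a RAID-0 bdev over two of them, a concat bdev over three more); the Malloc0..Malloc6 names are simply the ones returned in this particular run, and the default /var/tmp/spdk.sock RPC socket of the already-running nvmf_tgt is assumed.

    # sketch: rebuild the fio-target bdev set against a running nvmf_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                   # Malloc0, exported directly
    $rpc bdev_malloc_create 64 512                                   # Malloc1, exported directly
    $rpc bdev_malloc_create 64 512                                   # Malloc2 \ members of the
    $rpc bdev_malloc_create 64 512                                   # Malloc3 /  raid0 bdev
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_malloc_create 64 512                                   # Malloc4 \
    $rpc bdev_malloc_create 64 512                                   # Malloc5  > members of concat0
    $rpc bdev_malloc_create 64 512                                   # Malloc6 /
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

(The RPC socket is a UNIX-domain socket on the shared filesystem, so rpc.py is run from the default network namespace even though the target itself was started inside nvmf_tgt_ns_spdk, matching what the trace shows.)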
00:11:37.374 15:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:37.631 15:57:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:37.631 15:57:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:37.889 15:57:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:37.889 15:57:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:38.147 15:57:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.712 15:57:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:38.712 15:57:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.971 15:57:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:38.971 15:57:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.230 15:57:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:39.230 15:57:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:39.488 15:57:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.745 15:57:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:39.745 15:57:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:40.003 15:57:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:40.003 15:57:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.261 15:57:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.521 [2024-07-15 15:57:34.059458] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.521 15:57:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:40.780 15:57:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:41.038 15:57:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.038 15:57:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:41.038 15:57:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:41.038 15:57:34 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.038 15:57:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:41.038 15:57:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:41.038 15:57:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:43.568 15:57:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:43.568 15:57:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:43.568 15:57:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.568 15:57:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:43.568 15:57:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.568 15:57:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:43.568 15:57:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:43.568 [global] 00:11:43.568 thread=1 00:11:43.568 invalidate=1 00:11:43.568 rw=write 00:11:43.568 time_based=1 00:11:43.568 runtime=1 00:11:43.568 ioengine=libaio 00:11:43.568 direct=1 00:11:43.568 bs=4096 00:11:43.568 iodepth=1 00:11:43.568 norandommap=0 00:11:43.568 numjobs=1 00:11:43.568 00:11:43.568 verify_dump=1 00:11:43.568 verify_backlog=512 00:11:43.568 verify_state_save=0 00:11:43.568 do_verify=1 00:11:43.568 verify=crc32c-intel 00:11:43.568 [job0] 00:11:43.568 filename=/dev/nvme0n1 00:11:43.568 [job1] 00:11:43.568 filename=/dev/nvme0n2 00:11:43.568 [job2] 00:11:43.568 filename=/dev/nvme0n3 00:11:43.568 [job3] 00:11:43.568 filename=/dev/nvme0n4 00:11:43.568 Could not set queue depth (nvme0n1) 00:11:43.568 Could not set queue depth (nvme0n2) 00:11:43.568 Could not set queue depth (nvme0n3) 00:11:43.568 Could not set queue depth (nvme0n4) 00:11:43.568 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:43.568 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:43.568 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:43.568 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:43.568 fio-3.35 00:11:43.568 Starting 4 threads 00:11:44.502 00:11:44.502 job0: (groupid=0, jobs=1): err= 0: pid=77466: Mon Jul 15 15:57:38 2024 00:11:44.502 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:44.502 slat (nsec): min=13806, max=53935, avg=20300.39, stdev=6261.78 00:11:44.502 clat (usec): min=136, max=605, avg=178.46, stdev=33.22 00:11:44.502 lat (usec): min=152, max=623, avg=198.76, stdev=33.19 00:11:44.502 clat percentiles (usec): 00:11:44.502 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:11:44.502 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:11:44.502 | 70.00th=[ 182], 80.00th=[ 196], 90.00th=[ 221], 95.00th=[ 253], 00:11:44.502 | 99.00th=[ 293], 99.50th=[ 318], 99.90th=[ 330], 99.95th=[ 347], 00:11:44.502 | 99.99th=[ 603] 00:11:44.502 write: IOPS=2949, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec); 0 zone resets 00:11:44.502 slat (usec): min=19, max=128, avg=30.28, stdev= 8.87 00:11:44.502 clat (usec): min=99, max=893, avg=131.80, 
stdev=26.88 00:11:44.502 lat (usec): min=124, max=914, avg=162.07, stdev=28.23 00:11:44.502 clat percentiles (usec): 00:11:44.502 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 118], 00:11:44.502 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 130], 00:11:44.502 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 155], 95.00th=[ 172], 00:11:44.502 | 99.00th=[ 221], 99.50th=[ 243], 99.90th=[ 379], 99.95th=[ 445], 00:11:44.502 | 99.99th=[ 898] 00:11:44.502 bw ( KiB/s): min=12288, max=12288, per=32.06%, avg=12288.00, stdev= 0.00, samples=1 00:11:44.502 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:44.502 lat (usec) : 100=0.04%, 250=97.30%, 500=2.63%, 750=0.02%, 1000=0.02% 00:11:44.502 cpu : usr=3.50%, sys=9.80%, ctx=5514, majf=0, minf=13 00:11:44.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.502 issued rwts: total=2560,2952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.502 job1: (groupid=0, jobs=1): err= 0: pid=77467: Mon Jul 15 15:57:38 2024 00:11:44.502 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:44.502 slat (nsec): min=15426, max=55538, avg=27401.10, stdev=5352.67 00:11:44.502 clat (usec): min=184, max=973, avg=316.95, stdev=66.75 00:11:44.502 lat (usec): min=210, max=1017, avg=344.35, stdev=65.96 00:11:44.502 clat percentiles (usec): 00:11:44.502 | 1.00th=[ 235], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 273], 00:11:44.502 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:11:44.502 | 70.00th=[ 318], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 429], 00:11:44.502 | 99.00th=[ 603], 99.50th=[ 652], 99.90th=[ 750], 99.95th=[ 971], 00:11:44.502 | 99.99th=[ 971] 00:11:44.502 write: IOPS=1797, BW=7189KiB/s (7361kB/s)(7196KiB/1001msec); 0 zone resets 00:11:44.502 slat (usec): min=20, max=128, avg=36.58, stdev= 8.68 00:11:44.502 clat (usec): min=130, max=389, avg=219.48, stdev=27.42 00:11:44.502 lat (usec): min=168, max=451, avg=256.06, stdev=27.13 00:11:44.502 clat percentiles (usec): 00:11:44.502 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:11:44.502 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 221], 00:11:44.502 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 251], 95.00th=[ 273], 00:11:44.502 | 99.00th=[ 322], 99.50th=[ 343], 99.90th=[ 363], 99.95th=[ 392], 00:11:44.502 | 99.99th=[ 392] 00:11:44.502 bw ( KiB/s): min= 8192, max= 8192, per=21.37%, avg=8192.00, stdev= 0.00, samples=1 00:11:44.502 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:44.502 lat (usec) : 250=49.03%, 500=50.19%, 750=0.75%, 1000=0.03% 00:11:44.502 cpu : usr=1.60%, sys=8.70%, ctx=3335, majf=0, minf=7 00:11:44.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.502 issued rwts: total=1536,1799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.502 job2: (groupid=0, jobs=1): err= 0: pid=77468: Mon Jul 15 15:57:38 2024 00:11:44.502 read: IOPS=2664, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:11:44.502 slat (nsec): min=13379, max=52916, avg=17554.59, 
stdev=4935.11 00:11:44.503 clat (usec): min=145, max=573, avg=170.27, stdev=16.07 00:11:44.503 lat (usec): min=159, max=587, avg=187.83, stdev=17.24 00:11:44.503 clat percentiles (usec): 00:11:44.503 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:11:44.503 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:11:44.503 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:11:44.503 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 453], 99.95th=[ 523], 00:11:44.503 | 99.99th=[ 570] 00:11:44.503 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:44.503 slat (usec): min=19, max=110, avg=26.54, stdev= 8.75 00:11:44.503 clat (usec): min=102, max=1543, avg=132.45, stdev=28.52 00:11:44.503 lat (usec): min=128, max=1563, avg=158.99, stdev=30.27 00:11:44.503 clat percentiles (usec): 00:11:44.503 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 123], 00:11:44.503 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 133], 00:11:44.503 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:11:44.503 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 247], 99.95th=[ 310], 00:11:44.503 | 99.99th=[ 1549] 00:11:44.503 bw ( KiB/s): min=12288, max=12288, per=32.06%, avg=12288.00, stdev= 0.00, samples=1 00:11:44.503 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:44.503 lat (usec) : 250=99.86%, 500=0.09%, 750=0.03% 00:11:44.503 lat (msec) : 2=0.02% 00:11:44.503 cpu : usr=2.50%, sys=9.40%, ctx=5752, majf=0, minf=6 00:11:44.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.503 issued rwts: total=2667,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.503 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.503 job3: (groupid=0, jobs=1): err= 0: pid=77469: Mon Jul 15 15:57:38 2024 00:11:44.503 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:44.503 slat (nsec): min=13679, max=96588, avg=23438.16, stdev=7405.01 00:11:44.503 clat (usec): min=221, max=3176, avg=327.06, stdev=108.48 00:11:44.503 lat (usec): min=257, max=3213, avg=350.50, stdev=109.93 00:11:44.503 clat percentiles (usec): 00:11:44.503 | 1.00th=[ 260], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:11:44.503 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 306], 00:11:44.503 | 70.00th=[ 326], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 441], 00:11:44.503 | 99.00th=[ 603], 99.50th=[ 660], 99.90th=[ 1778], 99.95th=[ 3163], 00:11:44.503 | 99.99th=[ 3163] 00:11:44.503 write: IOPS=1766, BW=7065KiB/s (7234kB/s)(7072KiB/1001msec); 0 zone resets 00:11:44.503 slat (usec): min=19, max=155, avg=32.15, stdev= 9.94 00:11:44.503 clat (usec): min=112, max=401, avg=224.00, stdev=27.71 00:11:44.503 lat (usec): min=150, max=478, avg=256.15, stdev=27.29 00:11:44.503 clat percentiles (usec): 00:11:44.503 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:11:44.503 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:11:44.503 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 273], 00:11:44.503 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 388], 99.95th=[ 404], 00:11:44.503 | 99.99th=[ 404] 00:11:44.503 bw ( KiB/s): min= 8192, max= 8192, per=21.37%, avg=8192.00, stdev= 0.00, samples=1 00:11:44.503 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 
00:11:44.503 lat (usec) : 250=48.06%, 500=51.03%, 750=0.73%, 1000=0.03% 00:11:44.503 lat (msec) : 2=0.12%, 4=0.03% 00:11:44.503 cpu : usr=2.10%, sys=6.80%, ctx=3306, majf=0, minf=9 00:11:44.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.503 issued rwts: total=1536,1768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.503 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.503 00:11:44.503 Run status group 0 (all jobs): 00:11:44.503 READ: bw=32.4MiB/s (34.0MB/s), 6138KiB/s-10.4MiB/s (6285kB/s-10.9MB/s), io=32.4MiB (34.0MB), run=1001-1001msec 00:11:44.503 WRITE: bw=37.4MiB/s (39.2MB/s), 7065KiB/s-12.0MiB/s (7234kB/s-12.6MB/s), io=37.5MiB (39.3MB), run=1001-1001msec 00:11:44.503 00:11:44.503 Disk stats (read/write): 00:11:44.503 nvme0n1: ios=2142/2560, merge=0/0, ticks=408/350, in_queue=758, util=86.67% 00:11:44.503 nvme0n2: ios=1331/1536, merge=0/0, ticks=439/371, in_queue=810, util=87.21% 00:11:44.503 nvme0n3: ios=2278/2560, merge=0/0, ticks=388/366, in_queue=754, util=88.99% 00:11:44.503 nvme0n4: ios=1270/1536, merge=0/0, ticks=424/352, in_queue=776, util=89.35% 00:11:44.503 15:57:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:44.503 [global] 00:11:44.503 thread=1 00:11:44.503 invalidate=1 00:11:44.503 rw=randwrite 00:11:44.503 time_based=1 00:11:44.503 runtime=1 00:11:44.503 ioengine=libaio 00:11:44.503 direct=1 00:11:44.503 bs=4096 00:11:44.503 iodepth=1 00:11:44.503 norandommap=0 00:11:44.503 numjobs=1 00:11:44.503 00:11:44.503 verify_dump=1 00:11:44.503 verify_backlog=512 00:11:44.503 verify_state_save=0 00:11:44.503 do_verify=1 00:11:44.503 verify=crc32c-intel 00:11:44.503 [job0] 00:11:44.503 filename=/dev/nvme0n1 00:11:44.503 [job1] 00:11:44.503 filename=/dev/nvme0n2 00:11:44.503 [job2] 00:11:44.503 filename=/dev/nvme0n3 00:11:44.503 [job3] 00:11:44.503 filename=/dev/nvme0n4 00:11:44.503 Could not set queue depth (nvme0n1) 00:11:44.503 Could not set queue depth (nvme0n2) 00:11:44.503 Could not set queue depth (nvme0n3) 00:11:44.503 Could not set queue depth (nvme0n4) 00:11:44.761 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.761 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.761 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.761 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.761 fio-3.35 00:11:44.761 Starting 4 threads 00:11:46.153 00:11:46.153 job0: (groupid=0, jobs=1): err= 0: pid=77522: Mon Jul 15 15:57:39 2024 00:11:46.153 read: IOPS=2671, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:11:46.153 slat (nsec): min=13915, max=48085, avg=18724.69, stdev=3926.36 00:11:46.153 clat (usec): min=141, max=7698, avg=171.91, stdev=155.04 00:11:46.153 lat (usec): min=155, max=7716, avg=190.63, stdev=155.15 00:11:46.153 clat percentiles (usec): 00:11:46.153 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:11:46.153 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:11:46.153 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:11:46.153 | 99.00th=[ 
200], 99.50th=[ 217], 99.90th=[ 1713], 99.95th=[ 2180], 00:11:46.153 | 99.99th=[ 7701] 00:11:46.153 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:46.153 slat (usec): min=20, max=143, avg=27.18, stdev= 7.60 00:11:46.153 clat (usec): min=103, max=2915, avg=128.38, stdev=57.69 00:11:46.153 lat (usec): min=126, max=2942, avg=155.56, stdev=58.56 00:11:46.153 clat percentiles (usec): 00:11:46.153 | 1.00th=[ 110], 5.00th=[ 114], 10.00th=[ 116], 20.00th=[ 119], 00:11:46.153 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:11:46.153 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:11:46.153 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 249], 99.95th=[ 1516], 00:11:46.153 | 99.99th=[ 2900] 00:11:46.153 bw ( KiB/s): min=12239, max=12239, per=29.91%, avg=12239.00, stdev= 0.00, samples=1 00:11:46.153 iops : min= 3059, max= 3059, avg=3059.00, stdev= 0.00, samples=1 00:11:46.153 lat (usec) : 250=99.77%, 500=0.05%, 750=0.09% 00:11:46.153 lat (msec) : 2=0.03%, 4=0.03%, 10=0.02% 00:11:46.153 cpu : usr=2.70%, sys=9.80%, ctx=5746, majf=0, minf=13 00:11:46.153 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.153 issued rwts: total=2674,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.153 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.153 job1: (groupid=0, jobs=1): err= 0: pid=77523: Mon Jul 15 15:57:39 2024 00:11:46.153 read: IOPS=2895, BW=11.3MiB/s (11.9MB/s)(11.3MiB/1001msec) 00:11:46.153 slat (nsec): min=13514, max=69294, avg=16469.04, stdev=3826.18 00:11:46.153 clat (usec): min=134, max=1663, avg=165.94, stdev=40.08 00:11:46.153 lat (usec): min=154, max=1687, avg=182.41, stdev=40.49 00:11:46.153 clat percentiles (usec): 00:11:46.153 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155], 00:11:46.153 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:11:46.153 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:11:46.153 | 99.00th=[ 210], 99.50th=[ 231], 99.90th=[ 848], 99.95th=[ 881], 00:11:46.153 | 99.99th=[ 1663] 00:11:46.153 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:46.153 slat (usec): min=19, max=161, avg=24.11, stdev= 8.02 00:11:46.153 clat (usec): min=92, max=597, avg=125.59, stdev=20.06 00:11:46.153 lat (usec): min=122, max=622, avg=149.70, stdev=22.52 00:11:46.153 clat percentiles (usec): 00:11:46.153 | 1.00th=[ 106], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 116], 00:11:46.153 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 126], 00:11:46.153 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 149], 00:11:46.153 | 99.00th=[ 174], 99.50th=[ 192], 99.90th=[ 379], 99.95th=[ 570], 00:11:46.153 | 99.99th=[ 594] 00:11:46.153 bw ( KiB/s): min=12263, max=12263, per=29.97%, avg=12263.00, stdev= 0.00, samples=1 00:11:46.153 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:11:46.153 lat (usec) : 100=0.02%, 250=99.63%, 500=0.22%, 750=0.07%, 1000=0.05% 00:11:46.153 lat (msec) : 2=0.02% 00:11:46.153 cpu : usr=2.40%, sys=8.90%, ctx=5971, majf=0, minf=5 00:11:46.153 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:46.153 issued rwts: total=2898,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.153 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.153 job2: (groupid=0, jobs=1): err= 0: pid=77524: Mon Jul 15 15:57:39 2024 00:11:46.153 read: IOPS=1913, BW=7652KiB/s (7836kB/s)(7660KiB/1001msec) 00:11:46.153 slat (nsec): min=11428, max=38853, avg=16514.89, stdev=4236.09 00:11:46.153 clat (usec): min=155, max=465, avg=257.41, stdev=57.19 00:11:46.153 lat (usec): min=176, max=480, avg=273.93, stdev=54.39 00:11:46.153 clat percentiles (usec): 00:11:46.153 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 178], 00:11:46.153 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:11:46.153 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 334], 00:11:46.153 | 99.00th=[ 396], 99.50th=[ 416], 99.90th=[ 465], 99.95th=[ 465], 00:11:46.153 | 99.99th=[ 465] 00:11:46.153 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:46.153 slat (usec): min=11, max=143, avg=26.51, stdev= 7.90 00:11:46.153 clat (usec): min=116, max=495, avg=201.80, stdev=52.84 00:11:46.153 lat (usec): min=140, max=515, avg=228.31, stdev=50.13 00:11:46.153 clat percentiles (usec): 00:11:46.153 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 139], 00:11:46.153 | 30.00th=[ 153], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 223], 00:11:46.153 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 258], 95.00th=[ 281], 00:11:46.153 | 99.00th=[ 334], 99.50th=[ 359], 99.90th=[ 465], 99.95th=[ 494], 00:11:46.153 | 99.99th=[ 494] 00:11:46.153 bw ( KiB/s): min= 9147, max= 9147, per=22.35%, avg=9147.00, stdev= 0.00, samples=1 00:11:46.153 iops : min= 2286, max= 2286, avg=2286.00, stdev= 0.00, samples=1 00:11:46.153 lat (usec) : 250=58.97%, 500=41.03% 00:11:46.153 cpu : usr=1.60%, sys=6.70%, ctx=3963, majf=0, minf=14 00:11:46.153 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.153 issued rwts: total=1915,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.153 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.153 job3: (groupid=0, jobs=1): err= 0: pid=77525: Mon Jul 15 15:57:39 2024 00:11:46.153 read: IOPS=1842, BW=7369KiB/s (7545kB/s)(7376KiB/1001msec) 00:11:46.153 slat (nsec): min=11675, max=42880, avg=16750.27, stdev=4244.93 00:11:46.153 clat (usec): min=154, max=564, avg=256.57, stdev=59.33 00:11:46.153 lat (usec): min=177, max=578, avg=273.32, stdev=56.43 00:11:46.153 clat percentiles (usec): 00:11:46.153 | 1.00th=[ 161], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:11:46.153 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:11:46.153 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 334], 00:11:46.153 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 523], 99.95th=[ 562], 00:11:46.153 | 99.99th=[ 562] 00:11:46.153 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:46.153 slat (usec): min=11, max=604, avg=27.74, stdev=18.18 00:11:46.153 clat (usec): min=5, max=3303, avg=210.64, stdev=98.18 00:11:46.153 lat (usec): min=144, max=3398, avg=238.38, stdev=98.56 00:11:46.153 clat percentiles (usec): 00:11:46.153 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 143], 00:11:46.153 | 30.00th=[ 190], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 225], 00:11:46.153 | 70.00th=[ 233], 80.00th=[ 243], 
90.00th=[ 265], 95.00th=[ 289], 00:11:46.153 | 99.00th=[ 347], 99.50th=[ 469], 99.90th=[ 865], 99.95th=[ 1958], 00:11:46.153 | 99.99th=[ 3294] 00:11:46.153 bw ( KiB/s): min= 8399, max= 8399, per=20.53%, avg=8399.00, stdev= 0.00, samples=1 00:11:46.153 iops : min= 2099, max= 2099, avg=2099.00, stdev= 0.00, samples=1 00:11:46.153 lat (usec) : 10=0.03%, 50=0.03%, 250=58.04%, 500=41.62%, 750=0.18% 00:11:46.153 lat (usec) : 1000=0.05% 00:11:46.153 lat (msec) : 2=0.03%, 4=0.03% 00:11:46.153 cpu : usr=2.30%, sys=6.00%, ctx=3900, majf=0, minf=13 00:11:46.153 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.154 issued rwts: total=1844,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.154 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.154 00:11:46.154 Run status group 0 (all jobs): 00:11:46.154 READ: bw=36.4MiB/s (38.2MB/s), 7369KiB/s-11.3MiB/s (7545kB/s-11.9MB/s), io=36.4MiB (38.2MB), run=1001-1001msec 00:11:46.154 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:11:46.154 00:11:46.154 Disk stats (read/write): 00:11:46.154 nvme0n1: ios=2370/2560, merge=0/0, ticks=443/354, in_queue=797, util=87.37% 00:11:46.154 nvme0n2: ios=2579/2560, merge=0/0, ticks=456/341, in_queue=797, util=88.40% 00:11:46.154 nvme0n3: ios=1536/1944, merge=0/0, ticks=386/408, in_queue=794, util=89.32% 00:11:46.154 nvme0n4: ios=1536/1849, merge=0/0, ticks=383/399, in_queue=782, util=89.47% 00:11:46.154 15:57:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:46.154 [global] 00:11:46.154 thread=1 00:11:46.154 invalidate=1 00:11:46.154 rw=write 00:11:46.154 time_based=1 00:11:46.154 runtime=1 00:11:46.154 ioengine=libaio 00:11:46.154 direct=1 00:11:46.154 bs=4096 00:11:46.154 iodepth=128 00:11:46.154 norandommap=0 00:11:46.154 numjobs=1 00:11:46.154 00:11:46.154 verify_dump=1 00:11:46.154 verify_backlog=512 00:11:46.154 verify_state_save=0 00:11:46.154 do_verify=1 00:11:46.154 verify=crc32c-intel 00:11:46.154 [job0] 00:11:46.154 filename=/dev/nvme0n1 00:11:46.154 [job1] 00:11:46.154 filename=/dev/nvme0n2 00:11:46.154 [job2] 00:11:46.154 filename=/dev/nvme0n3 00:11:46.154 [job3] 00:11:46.154 filename=/dev/nvme0n4 00:11:46.154 Could not set queue depth (nvme0n1) 00:11:46.154 Could not set queue depth (nvme0n2) 00:11:46.154 Could not set queue depth (nvme0n3) 00:11:46.154 Could not set queue depth (nvme0n4) 00:11:46.154 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:46.154 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:46.154 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:46.154 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:46.154 fio-3.35 00:11:46.154 Starting 4 threads 00:11:47.527 00:11:47.527 job0: (groupid=0, jobs=1): err= 0: pid=77579: Mon Jul 15 15:57:40 2024 00:11:47.527 read: IOPS=3645, BW=14.2MiB/s (14.9MB/s)(14.4MiB/1009msec) 00:11:47.527 slat (usec): min=3, max=19673, avg=144.12, stdev=1003.05 00:11:47.527 clat (usec): min=5374, max=41601, avg=17848.41, stdev=6738.25 00:11:47.527 lat 
(usec): min=5385, max=41609, avg=17992.53, stdev=6802.57 00:11:47.527 clat percentiles (usec): 00:11:47.527 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[12125], 00:11:47.527 | 30.00th=[12649], 40.00th=[14615], 50.00th=[17171], 60.00th=[18744], 00:11:47.527 | 70.00th=[20579], 80.00th=[21627], 90.00th=[25560], 95.00th=[32900], 00:11:47.527 | 99.00th=[38536], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:11:47.527 | 99.99th=[41681] 00:11:47.527 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:11:47.527 slat (usec): min=4, max=8666, avg=106.56, stdev=392.57 00:11:47.527 clat (usec): min=2066, max=41589, avg=15204.67, stdev=5131.91 00:11:47.527 lat (usec): min=2077, max=41598, avg=15311.23, stdev=5167.25 00:11:47.527 clat percentiles (usec): 00:11:47.527 | 1.00th=[ 5538], 5.00th=[ 7046], 10.00th=[ 9765], 20.00th=[11207], 00:11:47.528 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13566], 60.00th=[15926], 00:11:47.528 | 70.00th=[19268], 80.00th=[21365], 90.00th=[22152], 95.00th=[22676], 00:11:47.528 | 99.00th=[23462], 99.50th=[23725], 99.90th=[39584], 99.95th=[39584], 00:11:47.528 | 99.99th=[41681] 00:11:47.528 bw ( KiB/s): min=15816, max=16688, per=28.60%, avg=16252.00, stdev=616.60, samples=2 00:11:47.528 iops : min= 3954, max= 4172, avg=4063.00, stdev=154.15, samples=2 00:11:47.528 lat (msec) : 4=0.09%, 10=8.72%, 20=59.53%, 50=31.66% 00:11:47.528 cpu : usr=3.67%, sys=10.71%, ctx=558, majf=0, minf=9 00:11:47.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:47.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:47.528 issued rwts: total=3678,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:47.528 job1: (groupid=0, jobs=1): err= 0: pid=77580: Mon Jul 15 15:57:40 2024 00:11:47.528 read: IOPS=2939, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1002msec) 00:11:47.528 slat (usec): min=3, max=11351, avg=175.41, stdev=912.94 00:11:47.528 clat (usec): min=1042, max=42460, avg=22395.70, stdev=9646.01 00:11:47.528 lat (usec): min=5655, max=42475, avg=22571.11, stdev=9692.05 00:11:47.528 clat percentiles (usec): 00:11:47.528 | 1.00th=[ 6259], 5.00th=[11600], 10.00th=[12256], 20.00th=[12911], 00:11:47.528 | 30.00th=[13304], 40.00th=[14222], 50.00th=[23987], 60.00th=[26870], 00:11:47.528 | 70.00th=[29230], 80.00th=[32375], 90.00th=[34866], 95.00th=[38536], 00:11:47.528 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:47.528 | 99.99th=[42206] 00:11:47.528 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:11:47.528 slat (usec): min=4, max=8067, avg=148.61, stdev=656.93 00:11:47.528 clat (usec): min=9827, max=37571, avg=19749.52, stdev=7872.80 00:11:47.528 lat (usec): min=9858, max=37598, avg=19898.14, stdev=7917.48 00:11:47.528 clat percentiles (usec): 00:11:47.528 | 1.00th=[10290], 5.00th=[10814], 10.00th=[11207], 20.00th=[11994], 00:11:47.528 | 30.00th=[13042], 40.00th=[13960], 50.00th=[15008], 60.00th=[22938], 00:11:47.528 | 70.00th=[25822], 80.00th=[28181], 90.00th=[30540], 95.00th=[32900], 00:11:47.528 | 99.00th=[35914], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:11:47.528 | 99.99th=[37487] 00:11:47.528 bw ( KiB/s): min= 8192, max=16384, per=21.62%, avg=12288.00, stdev=5792.62, samples=2 00:11:47.528 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:11:47.528 lat 
(msec) : 2=0.02%, 10=0.81%, 20=48.94%, 50=50.22% 00:11:47.528 cpu : usr=3.00%, sys=9.39%, ctx=598, majf=0, minf=8 00:11:47.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:47.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:47.528 issued rwts: total=2945,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:47.528 job2: (groupid=0, jobs=1): err= 0: pid=77581: Mon Jul 15 15:57:40 2024 00:11:47.528 read: IOPS=4020, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1003msec) 00:11:47.528 slat (usec): min=4, max=4760, avg=119.99, stdev=568.64 00:11:47.528 clat (usec): min=277, max=21476, avg=15775.63, stdev=2511.18 00:11:47.528 lat (usec): min=3118, max=22447, avg=15895.62, stdev=2469.90 00:11:47.528 clat percentiles (usec): 00:11:47.528 | 1.00th=[ 6456], 5.00th=[12649], 10.00th=[13173], 20.00th=[13566], 00:11:47.528 | 30.00th=[14746], 40.00th=[15270], 50.00th=[16057], 60.00th=[16712], 00:11:47.528 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18744], 95.00th=[19006], 00:11:47.528 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20579], 99.95th=[20579], 00:11:47.528 | 99.99th=[21365] 00:11:47.528 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:11:47.528 slat (usec): min=11, max=4541, avg=117.30, stdev=498.84 00:11:47.528 clat (usec): min=10300, max=20130, avg=15348.75, stdev=2128.72 00:11:47.528 lat (usec): min=10340, max=20159, avg=15466.05, stdev=2127.96 00:11:47.528 clat percentiles (usec): 00:11:47.528 | 1.00th=[11076], 5.00th=[11469], 10.00th=[12518], 20.00th=[13435], 00:11:47.528 | 30.00th=[13960], 40.00th=[14484], 50.00th=[15401], 60.00th=[16188], 00:11:47.528 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:11:47.528 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:11:47.528 | 99.99th=[20055] 00:11:47.528 bw ( KiB/s): min=16384, max=16384, per=28.83%, avg=16384.00, stdev= 0.00, samples=2 00:11:47.528 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:11:47.528 lat (usec) : 500=0.01% 00:11:47.528 lat (msec) : 4=0.39%, 10=0.70%, 20=98.68%, 50=0.21% 00:11:47.528 cpu : usr=3.89%, sys=13.17%, ctx=434, majf=0, minf=9 00:11:47.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:47.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:47.528 issued rwts: total=4033,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:47.528 job3: (groupid=0, jobs=1): err= 0: pid=77582: Mon Jul 15 15:57:40 2024 00:11:47.528 read: IOPS=2682, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1002msec) 00:11:47.528 slat (usec): min=3, max=11462, avg=185.25, stdev=891.69 00:11:47.528 clat (usec): min=708, max=41516, avg=24247.76, stdev=9278.51 00:11:47.528 lat (usec): min=5994, max=42496, avg=24433.01, stdev=9317.37 00:11:47.528 clat percentiles (usec): 00:11:47.528 | 1.00th=[ 6390], 5.00th=[12518], 10.00th=[13304], 20.00th=[14091], 00:11:47.528 | 30.00th=[14484], 40.00th=[16450], 50.00th=[27657], 60.00th=[30016], 00:11:47.528 | 70.00th=[31589], 80.00th=[33424], 90.00th=[34866], 95.00th=[36963], 00:11:47.528 | 99.00th=[38536], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:11:47.528 | 99.99th=[41681] 00:11:47.528 write: IOPS=3065, 
BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:11:47.528 slat (usec): min=12, max=9852, avg=155.71, stdev=684.99 00:11:47.528 clat (usec): min=10787, max=36221, avg=19946.80, stdev=6849.64 00:11:47.528 lat (usec): min=10822, max=36252, avg=20102.52, stdev=6896.64 00:11:47.528 clat percentiles (usec): 00:11:47.528 | 1.00th=[11994], 5.00th=[12256], 10.00th=[12387], 20.00th=[13960], 00:11:47.528 | 30.00th=[14484], 40.00th=[15270], 50.00th=[16712], 60.00th=[20841], 00:11:47.528 | 70.00th=[24249], 80.00th=[27657], 90.00th=[30278], 95.00th=[32375], 00:11:47.528 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[36439], 00:11:47.528 | 99.99th=[36439] 00:11:47.528 bw ( KiB/s): min= 9664, max=14912, per=21.62%, avg=12288.00, stdev=3710.90, samples=2 00:11:47.528 iops : min= 2416, max= 3728, avg=3072.00, stdev=927.72, samples=2 00:11:47.528 lat (usec) : 750=0.02% 00:11:47.528 lat (msec) : 10=0.56%, 20=48.96%, 50=50.47% 00:11:47.528 cpu : usr=2.60%, sys=9.19%, ctx=609, majf=0, minf=15 00:11:47.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:47.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:47.528 issued rwts: total=2688,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:47.528 00:11:47.528 Run status group 0 (all jobs): 00:11:47.528 READ: bw=51.7MiB/s (54.2MB/s), 10.5MiB/s-15.7MiB/s (11.0MB/s-16.5MB/s), io=52.1MiB (54.7MB), run=1002-1009msec 00:11:47.528 WRITE: bw=55.5MiB/s (58.2MB/s), 12.0MiB/s-16.0MiB/s (12.6MB/s-16.7MB/s), io=56.0MiB (58.7MB), run=1002-1009msec 00:11:47.528 00:11:47.528 Disk stats (read/write): 00:11:47.528 nvme0n1: ios=3122/3247, merge=0/0, ticks=54072/49307, in_queue=103379, util=88.28% 00:11:47.528 nvme0n2: ios=2584/2726, merge=0/0, ticks=13478/11261, in_queue=24739, util=87.91% 00:11:47.528 nvme0n3: ios=3132/3584, merge=0/0, ticks=12145/12496, in_queue=24641, util=89.09% 00:11:47.528 nvme0n4: ios=2394/2560, merge=0/0, ticks=13690/11159, in_queue=24849, util=89.34% 00:11:47.528 15:57:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:47.528 [global] 00:11:47.528 thread=1 00:11:47.528 invalidate=1 00:11:47.528 rw=randwrite 00:11:47.528 time_based=1 00:11:47.528 runtime=1 00:11:47.528 ioengine=libaio 00:11:47.528 direct=1 00:11:47.528 bs=4096 00:11:47.528 iodepth=128 00:11:47.528 norandommap=0 00:11:47.528 numjobs=1 00:11:47.528 00:11:47.528 verify_dump=1 00:11:47.528 verify_backlog=512 00:11:47.528 verify_state_save=0 00:11:47.528 do_verify=1 00:11:47.528 verify=crc32c-intel 00:11:47.528 [job0] 00:11:47.528 filename=/dev/nvme0n1 00:11:47.528 [job1] 00:11:47.528 filename=/dev/nvme0n2 00:11:47.528 [job2] 00:11:47.528 filename=/dev/nvme0n3 00:11:47.528 [job3] 00:11:47.528 filename=/dev/nvme0n4 00:11:47.528 Could not set queue depth (nvme0n1) 00:11:47.528 Could not set queue depth (nvme0n2) 00:11:47.528 Could not set queue depth (nvme0n3) 00:11:47.528 Could not set queue depth (nvme0n4) 00:11:47.528 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.528 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.528 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:11:47.528 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.528 fio-3.35 00:11:47.528 Starting 4 threads 00:11:48.899 00:11:48.899 job0: (groupid=0, jobs=1): err= 0: pid=77641: Mon Jul 15 15:57:42 2024 00:11:48.899 read: IOPS=5412, BW=21.1MiB/s (22.2MB/s)(21.2MiB/1004msec) 00:11:48.899 slat (usec): min=4, max=10303, avg=95.49, stdev=608.27 00:11:48.899 clat (usec): min=3078, max=22857, avg=12196.52, stdev=2995.26 00:11:48.899 lat (usec): min=3107, max=22876, avg=12292.02, stdev=3025.18 00:11:48.899 clat percentiles (usec): 00:11:48.899 | 1.00th=[ 5145], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[10159], 00:11:48.899 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11600], 60.00th=[12125], 00:11:48.899 | 70.00th=[12649], 80.00th=[14222], 90.00th=[16909], 95.00th=[18482], 00:11:48.899 | 99.00th=[20841], 99.50th=[21365], 99.90th=[22152], 99.95th=[22676], 00:11:48.899 | 99.99th=[22938] 00:11:48.899 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:11:48.899 slat (usec): min=4, max=8969, avg=78.22, stdev=391.99 00:11:48.899 clat (usec): min=3915, max=22729, avg=10800.61, stdev=2245.72 00:11:48.899 lat (usec): min=3937, max=22740, avg=10878.84, stdev=2280.15 00:11:48.899 clat percentiles (usec): 00:11:48.899 | 1.00th=[ 4490], 5.00th=[ 5735], 10.00th=[ 7177], 20.00th=[ 9634], 00:11:48.899 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[11731], 00:11:48.899 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12780], 95.00th=[13173], 00:11:48.899 | 99.00th=[15008], 99.50th=[16188], 99.90th=[21365], 99.95th=[21627], 00:11:48.899 | 99.99th=[22676] 00:11:48.899 bw ( KiB/s): min=21840, max=23262, per=33.79%, avg=22551.00, stdev=1005.51, samples=2 00:11:48.899 iops : min= 5460, max= 5815, avg=5637.50, stdev=251.02, samples=2 00:11:48.899 lat (msec) : 4=0.09%, 10=21.00%, 20=77.63%, 50=1.27% 00:11:48.899 cpu : usr=5.38%, sys=12.46%, ctx=734, majf=0, minf=3 00:11:48.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:48.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.899 issued rwts: total=5434,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.899 job1: (groupid=0, jobs=1): err= 0: pid=77642: Mon Jul 15 15:57:42 2024 00:11:48.899 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:11:48.899 slat (usec): min=4, max=10335, avg=91.53, stdev=576.03 00:11:48.899 clat (usec): min=4141, max=22022, avg=11925.09, stdev=2771.85 00:11:48.899 lat (usec): min=4152, max=22047, avg=12016.62, stdev=2801.88 00:11:48.899 clat percentiles (usec): 00:11:48.899 | 1.00th=[ 5997], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10028], 00:11:48.899 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:11:48.899 | 70.00th=[12649], 80.00th=[13435], 90.00th=[15664], 95.00th=[18220], 00:11:48.899 | 99.00th=[20579], 99.50th=[21365], 99.90th=[21890], 99.95th=[21890], 00:11:48.899 | 99.99th=[22152] 00:11:48.899 write: IOPS=5708, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1010msec); 0 zone resets 00:11:48.899 slat (usec): min=5, max=8980, avg=76.10, stdev=412.92 00:11:48.899 clat (usec): min=3725, max=21944, avg=10554.11, stdev=2130.39 00:11:48.899 lat (usec): min=3745, max=21960, avg=10630.20, stdev=2167.79 00:11:48.899 clat percentiles (usec): 00:11:48.899 | 1.00th=[ 4621], 5.00th=[ 5735], 
10.00th=[ 6980], 20.00th=[ 9241], 00:11:48.899 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11600], 00:11:48.899 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12387], 00:11:48.899 | 99.00th=[13960], 99.50th=[14746], 99.90th=[21103], 99.95th=[21890], 00:11:48.899 | 99.99th=[21890] 00:11:48.899 bw ( KiB/s): min=20552, max=24560, per=33.79%, avg=22556.00, stdev=2834.08, samples=2 00:11:48.899 iops : min= 5138, max= 6140, avg=5639.00, stdev=708.52, samples=2 00:11:48.899 lat (msec) : 4=0.15%, 10=23.61%, 20=75.10%, 50=1.14% 00:11:48.899 cpu : usr=5.15%, sys=14.47%, ctx=721, majf=0, minf=3 00:11:48.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:48.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.899 issued rwts: total=5632,5766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.899 job2: (groupid=0, jobs=1): err= 0: pid=77643: Mon Jul 15 15:57:42 2024 00:11:48.899 read: IOPS=2110, BW=8441KiB/s (8644kB/s)(8568KiB/1015msec) 00:11:48.899 slat (usec): min=3, max=20816, avg=182.01, stdev=1182.43 00:11:48.899 clat (usec): min=5717, max=47083, avg=21689.27, stdev=8030.49 00:11:48.899 lat (usec): min=5732, max=47115, avg=21871.29, stdev=8110.43 00:11:48.899 clat percentiles (usec): 00:11:48.899 | 1.00th=[ 7767], 5.00th=[12125], 10.00th=[13304], 20.00th=[13698], 00:11:48.899 | 30.00th=[14222], 40.00th=[17957], 50.00th=[23725], 60.00th=[24773], 00:11:48.899 | 70.00th=[25297], 80.00th=[26346], 90.00th=[32900], 95.00th=[36963], 00:11:48.899 | 99.00th=[43254], 99.50th=[44827], 99.90th=[46400], 99.95th=[46400], 00:11:48.899 | 99.99th=[46924] 00:11:48.899 write: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec); 0 zone resets 00:11:48.899 slat (usec): min=4, max=19668, avg=231.43, stdev=1132.76 00:11:48.899 clat (msec): min=4, max=110, avg=32.03, stdev=19.04 00:11:48.899 lat (msec): min=4, max=110, avg=32.27, stdev=19.13 00:11:48.899 clat percentiles (msec): 00:11:48.899 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 24], 00:11:48.899 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 27], 00:11:48.899 | 70.00th=[ 27], 80.00th=[ 37], 90.00th=[ 63], 95.00th=[ 70], 00:11:48.899 | 99.00th=[ 110], 99.50th=[ 110], 99.90th=[ 111], 99.95th=[ 111], 00:11:48.899 | 99.99th=[ 111] 00:11:48.899 bw ( KiB/s): min= 8926, max=11272, per=15.13%, avg=10099.00, stdev=1658.87, samples=2 00:11:48.899 iops : min= 2231, max= 2818, avg=2524.50, stdev=415.07, samples=2 00:11:48.899 lat (msec) : 10=3.93%, 20=22.10%, 50=66.23%, 100=6.57%, 250=1.17% 00:11:48.899 cpu : usr=2.07%, sys=5.92%, ctx=332, majf=0, minf=11 00:11:48.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:48.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.899 issued rwts: total=2142,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.899 job3: (groupid=0, jobs=1): err= 0: pid=77644: Mon Jul 15 15:57:42 2024 00:11:48.899 read: IOPS=2509, BW=9.80MiB/s (10.3MB/s)(10.0MiB/1020msec) 00:11:48.899 slat (usec): min=6, max=22119, avg=198.68, stdev=1313.25 00:11:48.899 clat (usec): min=6930, max=81016, avg=21568.93, stdev=11623.99 00:11:48.899 lat (usec): min=6943, max=81049, avg=21767.61, 
stdev=11781.50 00:11:48.899 clat percentiles (usec): 00:11:48.899 | 1.00th=[ 7504], 5.00th=[11076], 10.00th=[12256], 20.00th=[13435], 00:11:48.899 | 30.00th=[13829], 40.00th=[15008], 50.00th=[17171], 60.00th=[24511], 00:11:48.899 | 70.00th=[25297], 80.00th=[26084], 90.00th=[28967], 95.00th=[42730], 00:11:48.899 | 99.00th=[79168], 99.50th=[80217], 99.90th=[81265], 99.95th=[81265], 00:11:48.899 | 99.99th=[81265] 00:11:48.899 write: IOPS=3001, BW=11.7MiB/s (12.3MB/s)(12.0MiB/1020msec); 0 zone resets 00:11:48.899 slat (usec): min=4, max=23997, avg=152.53, stdev=914.89 00:11:48.899 clat (usec): min=4190, max=80881, avg=24206.35, stdev=12407.49 00:11:48.899 lat (usec): min=4215, max=80903, avg=24358.88, stdev=12465.75 00:11:48.899 clat percentiles (usec): 00:11:48.899 | 1.00th=[ 5735], 5.00th=[ 9372], 10.00th=[11731], 20.00th=[12911], 00:11:48.899 | 30.00th=[20055], 40.00th=[23462], 50.00th=[24773], 60.00th=[25822], 00:11:48.899 | 70.00th=[26084], 80.00th=[26608], 90.00th=[29492], 95.00th=[53740], 00:11:48.899 | 99.00th=[69731], 99.50th=[69731], 99.90th=[80217], 99.95th=[81265], 00:11:48.899 | 99.99th=[81265] 00:11:48.899 bw ( KiB/s): min=11184, max=12288, per=17.58%, avg=11736.00, stdev=780.65, samples=2 00:11:48.899 iops : min= 2796, max= 3072, avg=2934.00, stdev=195.16, samples=2 00:11:48.899 lat (msec) : 10=4.30%, 20=35.70%, 50=55.48%, 100=4.52% 00:11:48.899 cpu : usr=3.14%, sys=7.46%, ctx=373, majf=0, minf=6 00:11:48.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:48.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.899 issued rwts: total=2560,3062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.899 00:11:48.899 Run status group 0 (all jobs): 00:11:48.899 READ: bw=60.4MiB/s (63.3MB/s), 8441KiB/s-21.8MiB/s (8644kB/s-22.8MB/s), io=61.6MiB (64.6MB), run=1004-1020msec 00:11:48.899 WRITE: bw=65.2MiB/s (68.3MB/s), 9.85MiB/s-22.3MiB/s (10.3MB/s-23.4MB/s), io=66.5MiB (69.7MB), run=1004-1020msec 00:11:48.899 00:11:48.899 Disk stats (read/write): 00:11:48.899 nvme0n1: ios=4658/4888, merge=0/0, ticks=52429/50298, in_queue=102727, util=88.28% 00:11:48.899 nvme0n2: ios=4657/5087, merge=0/0, ticks=51229/51168, in_queue=102397, util=89.17% 00:11:48.899 nvme0n3: ios=2048/2071, merge=0/0, ticks=42621/63077, in_queue=105698, util=89.07% 00:11:48.899 nvme0n4: ios=2048/2535, merge=0/0, ticks=42938/58361, in_queue=101299, util=89.72% 00:11:48.899 15:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:48.899 15:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77657 00:11:48.899 15:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:48.899 15:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:48.899 [global] 00:11:48.899 thread=1 00:11:48.899 invalidate=1 00:11:48.899 rw=read 00:11:48.899 time_based=1 00:11:48.900 runtime=10 00:11:48.900 ioengine=libaio 00:11:48.900 direct=1 00:11:48.900 bs=4096 00:11:48.900 iodepth=1 00:11:48.900 norandommap=1 00:11:48.900 numjobs=1 00:11:48.900 00:11:48.900 [job0] 00:11:48.900 filename=/dev/nvme0n1 00:11:48.900 [job1] 00:11:48.900 filename=/dev/nvme0n2 00:11:48.900 [job2] 00:11:48.900 filename=/dev/nvme0n3 00:11:48.900 [job3] 00:11:48.900 filename=/dev/nvme0n4 00:11:48.900 Could not set queue depth (nvme0n1) 
00:11:48.900 Could not set queue depth (nvme0n2) 00:11:48.900 Could not set queue depth (nvme0n3) 00:11:48.900 Could not set queue depth (nvme0n4) 00:11:48.900 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.900 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.900 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.900 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.900 fio-3.35 00:11:48.900 Starting 4 threads 00:11:52.184 15:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:52.184 fio: pid=77705, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:52.184 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=30375936, buflen=4096 00:11:52.184 15:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:52.184 fio: pid=77704, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:52.184 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=32956416, buflen=4096 00:11:52.443 15:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:52.443 15:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:52.701 fio: pid=77701, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:52.701 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=41132032, buflen=4096 00:11:52.701 15:57:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:52.701 15:57:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:52.960 fio: pid=77703, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:52.960 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=22994944, buflen=4096 00:11:52.960 00:11:52.960 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77701: Mon Jul 15 15:57:46 2024 00:11:52.960 read: IOPS=2776, BW=10.8MiB/s (11.4MB/s)(39.2MiB/3617msec) 00:11:52.960 slat (usec): min=9, max=8806, avg=20.85, stdev=162.42 00:11:52.960 clat (usec): min=122, max=4456, avg=337.19, stdev=128.58 00:11:52.960 lat (usec): min=151, max=9125, avg=358.04, stdev=205.44 00:11:52.960 clat percentiles (usec): 00:11:52.960 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 243], 00:11:52.960 | 30.00th=[ 314], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 367], 00:11:52.960 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 445], 95.00th=[ 506], 00:11:52.960 | 99.00th=[ 644], 99.50th=[ 717], 99.90th=[ 1237], 99.95th=[ 1762], 00:11:52.960 | 99.99th=[ 2769] 00:11:52.960 bw ( KiB/s): min= 9120, max=16671, per=22.43%, avg=10933.57, stdev=2602.38, samples=7 00:11:52.960 iops : min= 2280, max= 4167, avg=2733.29, stdev=650.32, samples=7 00:11:52.960 lat (usec) : 250=20.28%, 500=74.36%, 750=4.95%, 1000=0.23% 00:11:52.960 lat (msec) : 2=0.12%, 4=0.04%, 10=0.01% 00:11:52.960 cpu : usr=1.11%, sys=4.31%, ctx=10051, majf=0, minf=1 00:11:52.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.960 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.960 issued rwts: total=10043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.960 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77703: Mon Jul 15 15:57:46 2024 00:11:52.960 read: IOPS=5643, BW=22.0MiB/s (23.1MB/s)(85.9MiB/3898msec) 00:11:52.960 slat (usec): min=13, max=7860, avg=18.30, stdev=120.34 00:11:52.960 clat (usec): min=128, max=1932, avg=157.27, stdev=28.55 00:11:52.960 lat (usec): min=142, max=8113, avg=175.57, stdev=124.62 00:11:52.960 clat percentiles (usec): 00:11:52.960 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:11:52.960 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:11:52.960 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 188], 00:11:52.960 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 293], 99.95th=[ 424], 00:11:52.960 | 99.99th=[ 1631] 00:11:52.960 bw ( KiB/s): min=21360, max=23192, per=46.27%, avg=22554.00, stdev=644.61, samples=7 00:11:52.960 iops : min= 5340, max= 5798, avg=5638.43, stdev=161.18, samples=7 00:11:52.960 lat (usec) : 250=99.77%, 500=0.19%, 750=0.01%, 1000=0.01% 00:11:52.960 lat (msec) : 2=0.02% 00:11:52.960 cpu : usr=1.67%, sys=7.57%, ctx=22007, majf=0, minf=1 00:11:52.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.960 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.960 issued rwts: total=21999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.960 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77704: Mon Jul 15 15:57:46 2024 00:11:52.960 read: IOPS=2446, BW=9785KiB/s (10.0MB/s)(31.4MiB/3289msec) 00:11:52.960 slat (usec): min=13, max=9520, avg=28.16, stdev=135.66 00:11:52.960 clat (usec): min=146, max=2410, avg=377.73, stdev=94.66 00:11:52.960 lat (usec): min=163, max=9925, avg=405.89, stdev=167.14 00:11:52.960 clat percentiles (usec): 00:11:52.960 | 1.00th=[ 167], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 334], 00:11:52.960 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:11:52.960 | 70.00th=[ 388], 80.00th=[ 416], 90.00th=[ 478], 95.00th=[ 529], 00:11:52.960 | 99.00th=[ 676], 99.50th=[ 734], 99.90th=[ 1106], 99.95th=[ 1369], 00:11:52.960 | 99.99th=[ 2409] 00:11:52.960 bw ( KiB/s): min= 8864, max=10280, per=19.69%, avg=9600.00, stdev=500.14, samples=6 00:11:52.960 iops : min= 2216, max= 2570, avg=2400.00, stdev=125.03, samples=6 00:11:52.960 lat (usec) : 250=2.17%, 500=90.28%, 750=7.08%, 1000=0.30% 00:11:52.960 lat (msec) : 2=0.11%, 4=0.04% 00:11:52.960 cpu : usr=1.40%, sys=5.02%, ctx=8055, majf=0, minf=1 00:11:52.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.960 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.960 issued rwts: total=8047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.960 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, 
func=io_u error, error=Remote I/O error): pid=77705: Mon Jul 15 15:57:46 2024 00:11:52.960 read: IOPS=2498, BW=9991KiB/s (10.2MB/s)(29.0MiB/2969msec) 00:11:52.960 slat (usec): min=9, max=110, avg=17.17, stdev= 5.51 00:11:52.960 clat (usec): min=182, max=2575, avg=380.85, stdev=80.37 00:11:52.960 lat (usec): min=194, max=2591, avg=398.02, stdev=81.12 00:11:52.960 clat percentiles (usec): 00:11:52.960 | 1.00th=[ 194], 5.00th=[ 293], 10.00th=[ 330], 20.00th=[ 347], 00:11:52.960 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:11:52.960 | 70.00th=[ 388], 80.00th=[ 408], 90.00th=[ 465], 95.00th=[ 529], 00:11:52.960 | 99.00th=[ 652], 99.50th=[ 701], 99.90th=[ 889], 99.95th=[ 1270], 00:11:52.960 | 99.99th=[ 2573] 00:11:52.960 bw ( KiB/s): min= 9120, max=10960, per=20.55%, avg=10019.20, stdev=735.23, samples=5 00:11:52.960 iops : min= 2280, max= 2740, avg=2504.80, stdev=183.81, samples=5 00:11:52.960 lat (usec) : 250=3.57%, 500=89.78%, 750=6.39%, 1000=0.16% 00:11:52.960 lat (msec) : 2=0.07%, 4=0.01% 00:11:52.960 cpu : usr=1.18%, sys=3.81%, ctx=7419, majf=0, minf=1 00:11:52.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.960 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.960 issued rwts: total=7417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.960 00:11:52.960 Run status group 0 (all jobs): 00:11:52.960 READ: bw=47.6MiB/s (49.9MB/s), 9785KiB/s-22.0MiB/s (10.0MB/s-23.1MB/s), io=186MiB (195MB), run=2969-3898msec 00:11:52.960 00:11:52.960 Disk stats (read/write): 00:11:52.960 nvme0n1: ios=10043/0, merge=0/0, ticks=3320/0, in_queue=3320, util=95.67% 00:11:52.960 nvme0n2: ios=21822/0, merge=0/0, ticks=3520/0, in_queue=3520, util=96.01% 00:11:52.960 nvme0n3: ios=7497/0, merge=0/0, ticks=2896/0, in_queue=2896, util=96.30% 00:11:52.960 nvme0n4: ios=7163/0, merge=0/0, ticks=2660/0, in_queue=2660, util=96.76% 00:11:52.960 15:57:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:52.960 15:57:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:53.219 15:57:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:53.219 15:57:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:53.478 15:57:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:53.478 15:57:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:53.737 15:57:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:53.737 15:57:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:53.995 15:57:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:53.995 15:57:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:54.254 15:57:47 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:54.254 15:57:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77657 00:11:54.254 15:57:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:54.254 15:57:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.513 15:57:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.513 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:54.513 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:54.513 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.513 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:54.513 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.513 nvmf hotplug test: fio failed as expected 00:11:54.513 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:54.513 15:57:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:54.513 15:57:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:54.513 15:57:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.772 rmmod nvme_tcp 00:11:54.772 rmmod nvme_fabrics 00:11:54.772 rmmod nvme_keyring 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 77171 ']' 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 77171 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 77171 ']' 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 77171 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77171 00:11:54.772 killing process with pid 77171 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77171' 00:11:54.772 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 77171 00:11:54.773 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 77171 00:11:55.031 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.031 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.031 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.031 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.031 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.031 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.031 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.031 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.031 15:57:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:55.031 00:11:55.031 real 0m20.147s 00:11:55.031 user 1m17.727s 00:11:55.031 sys 0m9.007s 00:11:55.031 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.031 ************************************ 00:11:55.031 END TEST nvmf_fio_target 00:11:55.031 ************************************ 00:11:55.031 15:57:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.031 15:57:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:55.031 15:57:48 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:55.031 15:57:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:55.031 15:57:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.031 15:57:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:55.290 ************************************ 00:11:55.290 START TEST nvmf_bdevio 00:11:55.290 ************************************ 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:55.290 * Looking for test storage... 
00:11:55.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.290 15:57:48 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.290 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:55.291 Cannot find device "nvmf_tgt_br" 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:55.291 Cannot find device "nvmf_tgt_br2" 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:55.291 Cannot find device "nvmf_tgt_br" 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:55.291 Cannot find device "nvmf_tgt_br2" 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:55.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:55.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:55.291 15:57:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:55.291 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:55.291 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:55.549 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:55.549 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:55.549 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:55.549 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:55.549 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:55.549 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:55.549 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:55.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:11:55.550 00:11:55.550 --- 10.0.0.2 ping statistics --- 00:11:55.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.550 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:55.550 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:55.550 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:11:55.550 00:11:55.550 --- 10.0.0.3 ping statistics --- 00:11:55.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.550 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:55.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:11:55.550 00:11:55.550 --- 10.0.0.1 ping statistics --- 00:11:55.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.550 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=78033 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 78033 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 78033 ']' 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.550 15:57:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.550 [2024-07-15 15:57:49.266625] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:11:55.550 [2024-07-15 15:57:49.266723] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.808 [2024-07-15 15:57:49.402535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.808 [2024-07-15 15:57:49.532612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.808 [2024-07-15 15:57:49.532688] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
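The nvmf_veth_init block above (common.sh@141 through @207) builds the test network the pings just verified: one namespace for the target, two veth pairs whose bridge ends are enslaved to nvmf_br, and the host-side nvmf_init_if acting as the initiator. Condensed into a runnable sketch using the same names and addresses that appear in the trace (run as root; the per-interface `ip link set ... up` calls are elided):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: *_if is the endpoint, *_br is the end that joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Accept NVMe/TCP (port 4420) from the initiator interface and let bridged traffic hairpin.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With this in place, nvmf_tgt is launched under `ip netns exec nvmf_tgt_ns_spdk`, exactly as the NVMF_APP assembly on common.sh@209 shows.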
00:11:55.808 [2024-07-15 15:57:49.532703] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.808 [2024-07-15 15:57:49.532715] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.808 [2024-07-15 15:57:49.532724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.808 [2024-07-15 15:57:49.533416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:55.808 [2024-07-15 15:57:49.533589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:55.808 [2024-07-15 15:57:49.533663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:55.808 [2024-07-15 15:57:49.533674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:56.764 [2024-07-15 15:57:50.309249] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:56.764 Malloc0 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
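The rpc_cmd calls above are what stand the bdevio target up: a TCP transport, a 64 MiB malloc bdev, a subsystem, a namespace, and a listener on 10.0.0.2:4420. rpc_cmd in these scripts forwards to scripts/rpc.py, so the same target can be built by hand with the flags taken straight from the trace (sketch; the rpc.py path assumes this workspace layout):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192            # flags exactly as in the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB ramdisk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420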
00:11:56.764 [2024-07-15 15:57:50.384590] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:56.764 { 00:11:56.764 "params": { 00:11:56.764 "name": "Nvme$subsystem", 00:11:56.764 "trtype": "$TEST_TRANSPORT", 00:11:56.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:56.764 "adrfam": "ipv4", 00:11:56.764 "trsvcid": "$NVMF_PORT", 00:11:56.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:56.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:56.764 "hdgst": ${hdgst:-false}, 00:11:56.764 "ddgst": ${ddgst:-false} 00:11:56.764 }, 00:11:56.764 "method": "bdev_nvme_attach_controller" 00:11:56.764 } 00:11:56.764 EOF 00:11:56.764 )") 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:56.764 15:57:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:56.764 "params": { 00:11:56.764 "name": "Nvme1", 00:11:56.764 "trtype": "tcp", 00:11:56.764 "traddr": "10.0.0.2", 00:11:56.764 "adrfam": "ipv4", 00:11:56.764 "trsvcid": "4420", 00:11:56.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:56.764 "hdgst": false, 00:11:56.764 "ddgst": false 00:11:56.764 }, 00:11:56.764 "method": "bdev_nvme_attach_controller" 00:11:56.764 }' 00:11:56.764 [2024-07-15 15:57:50.456442] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
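gen_nvmf_target_json above resolves to the single bdev_nvme_attach_controller entry printed in the trace, and bdevio receives it on /dev/fd/62 via process substitution. To rerun that step by hand, the fragment can be placed in a regular file; note that the surrounding "subsystems"/"bdev" wrapper shown here is the standard SPDK JSON-config layout and is an assumption, since the trace only prints the inner object:

    # Wrapper structure is assumed; the "params" values are copied verbatim from the trace.
    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/nvme1.json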
00:11:56.764 [2024-07-15 15:57:50.456569] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78087 ] 00:11:57.023 [2024-07-15 15:57:50.597373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:57.023 [2024-07-15 15:57:50.749448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.023 [2024-07-15 15:57:50.749520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.023 [2024-07-15 15:57:50.749528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.281 I/O targets: 00:11:57.281 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:57.281 00:11:57.281 00:11:57.281 CUnit - A unit testing framework for C - Version 2.1-3 00:11:57.281 http://cunit.sourceforge.net/ 00:11:57.281 00:11:57.281 00:11:57.281 Suite: bdevio tests on: Nvme1n1 00:11:57.281 Test: blockdev write read block ...passed 00:11:57.540 Test: blockdev write zeroes read block ...passed 00:11:57.540 Test: blockdev write zeroes read no split ...passed 00:11:57.540 Test: blockdev write zeroes read split ...passed 00:11:57.540 Test: blockdev write zeroes read split partial ...passed 00:11:57.540 Test: blockdev reset ...[2024-07-15 15:57:51.052151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:57.540 [2024-07-15 15:57:51.052291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae1180 (9): Bad file descriptor 00:11:57.540 [2024-07-15 15:57:51.071141] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:57.540 passed 00:11:57.540 Test: blockdev write read 8 blocks ...passed 00:11:57.540 Test: blockdev write read size > 128k ...passed 00:11:57.540 Test: blockdev write read invalid size ...passed 00:11:57.540 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:57.540 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:57.540 Test: blockdev write read max offset ...passed 00:11:57.540 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:57.540 Test: blockdev writev readv 8 blocks ...passed 00:11:57.540 Test: blockdev writev readv 30 x 1block ...passed 00:11:57.540 Test: blockdev writev readv block ...passed 00:11:57.540 Test: blockdev writev readv size > 128k ...passed 00:11:57.540 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:57.540 Test: blockdev comparev and writev ...[2024-07-15 15:57:51.245561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.540 [2024-07-15 15:57:51.245628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:57.540 [2024-07-15 15:57:51.245649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.540 [2024-07-15 15:57:51.245660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:57.540 [2024-07-15 15:57:51.246103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.540 [2024-07-15 15:57:51.246131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:57.540 [2024-07-15 15:57:51.246150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.540 [2024-07-15 15:57:51.246160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:57.540 [2024-07-15 15:57:51.246498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.540 [2024-07-15 15:57:51.246525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:57.540 [2024-07-15 15:57:51.246542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.540 [2024-07-15 15:57:51.246552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:57.540 [2024-07-15 15:57:51.246849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.540 [2024-07-15 15:57:51.246875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:57.540 [2024-07-15 15:57:51.246892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:57.540 [2024-07-15 15:57:51.246903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:57.799 passed 00:11:57.799 Test: blockdev nvme passthru rw ...passed 00:11:57.799 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:57:51.330378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:57.799 [2024-07-15 15:57:51.330447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:57.799 [2024-07-15 15:57:51.330815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:57.799 [2024-07-15 15:57:51.330847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:57.799 [2024-07-15 15:57:51.331079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:57.799 [2024-07-15 15:57:51.331110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:57.799 [2024-07-15 15:57:51.331319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:57.799 [2024-07-15 15:57:51.331349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:57.799 passed 00:11:57.799 Test: blockdev nvme admin passthru ...passed 00:11:57.799 Test: blockdev copy ...passed 00:11:57.799 00:11:57.799 Run Summary: Type Total Ran Passed Failed Inactive 00:11:57.799 suites 1 1 n/a 0 0 00:11:57.799 tests 23 23 23 0 0 00:11:57.799 asserts 152 152 152 0 n/a 00:11:57.799 00:11:57.799 Elapsed time = 0.891 seconds 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:58.057 rmmod nvme_tcp 00:11:58.057 rmmod nvme_fabrics 00:11:58.057 rmmod nvme_keyring 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 78033 ']' 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 78033 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
78033 ']' 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 78033 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78033 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:58.057 killing process with pid 78033 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78033' 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 78033 00:11:58.057 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 78033 00:11:58.315 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:58.315 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:58.315 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:58.315 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.315 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:58.315 15:57:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.315 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.315 15:57:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.315 15:57:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:58.315 00:11:58.315 real 0m3.242s 00:11:58.315 user 0m11.816s 00:11:58.315 sys 0m0.810s 00:11:58.315 15:57:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:58.315 15:57:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.315 ************************************ 00:11:58.315 END TEST nvmf_bdevio 00:11:58.315 ************************************ 00:11:58.574 15:57:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:58.574 15:57:52 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:58.574 15:57:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:58.574 15:57:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.574 15:57:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:58.574 ************************************ 00:11:58.574 START TEST nvmf_auth_target 00:11:58.574 ************************************ 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:58.574 * Looking for test storage... 
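The nvmf_bdevio epilogue above is nvmftestfini: unload the kernel NVMe-over-TCP initiator modules, stop the target app (pid 78033 in this run), drop the target's network namespace, and flush the initiator address so the next suite's nvmf_veth_init starts from scratch. A condensed sketch of the same sequence; the netns deletion is an assumption about what _remove_spdk_ns amounts to here, since the trace hides its output:

    sync
    modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"      # 78033 in this run
    ip netns delete nvmf_tgt_ns_spdk        # assumed equivalent of _remove_spdk_ns in this setup
    ip -4 addr flush nvmf_init_if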
00:11:58.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:58.574 Cannot find device "nvmf_tgt_br" 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:58.574 Cannot find device "nvmf_tgt_br2" 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:58.574 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:58.574 Cannot find device "nvmf_tgt_br" 00:11:58.574 
15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:11:58.575 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:58.575 Cannot find device "nvmf_tgt_br2" 00:11:58.575 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:11:58.575 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:58.575 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:58.575 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:58.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:58.575 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:58.575 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:58.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:58.575 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:58.575 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:58.575 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:58.834 15:57:52 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:58.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:58.834 00:11:58.834 --- 10.0.0.2 ping statistics --- 00:11:58.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.834 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:58.834 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:58.834 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:11:58.834 00:11:58.834 --- 10.0.0.3 ping statistics --- 00:11:58.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.834 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:58.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:58.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:58.834 00:11:58.834 --- 10.0.0.1 ping statistics --- 00:11:58.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.834 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=78264 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 78264 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78264 ']' 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.834 15:57:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.834 15:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=78308 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e559b1f52b8dd2c580fc02fc8d5d2b0d472d001816206294 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Uz0 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e559b1f52b8dd2c580fc02fc8d5d2b0d472d001816206294 0 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e559b1f52b8dd2c580fc02fc8d5d2b0d472d001816206294 0 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e559b1f52b8dd2c580fc02fc8d5d2b0d472d001816206294 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Uz0 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Uz0 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Uz0 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0f7b87782f58257a8c02dd23e050c6a3deca3450fac96cd726f0fcedcfa3a3a6 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Uv5 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0f7b87782f58257a8c02dd23e050c6a3deca3450fac96cd726f0fcedcfa3a3a6 3 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0f7b87782f58257a8c02dd23e050c6a3deca3450fac96cd726f0fcedcfa3a3a6 3 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0f7b87782f58257a8c02dd23e050c6a3deca3450fac96cd726f0fcedcfa3a3a6 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Uv5 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Uv5 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Uv5 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aa91e66e667d5b8b7508e43f04c76070 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7HD 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aa91e66e667d5b8b7508e43f04c76070 1 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aa91e66e667d5b8b7508e43f04c76070 1 
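gen_dhchap_key, exercised repeatedly above and below to fill keys[] and ckeys[], follows one recipe: draw len/2 random bytes as a hex string, pass them with the digest index (null=0, sha256=1, sha384=2, sha512=3, per the map in the trace) to format_dhchap_key, and park the resulting DHHC-1 secret in a 0600 temp file. A sketch of one iteration; the DHHC-1 encoding itself is done by an inline python helper in nvmf/common.sh and is not reproduced here, and how the formatted secret reaches the file is an assumption, since the trace only shows the mktemp and chmod:

    digest=null; len=48
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # 24 random bytes -> 48 hex characters
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    format_dhchap_key "$key" 0 > "$file"                # 0 = index of the 'null' digest; redirect is illustrative
    chmod 0600 "$file"
    echo "$file"                                        # the path is what keys[0] ends up holding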
00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aa91e66e667d5b8b7508e43f04c76070 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7HD 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7HD 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.7HD 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7a960b1744c9324f382cc635930591ddbbbeb681aa88b9b3 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZqD 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7a960b1744c9324f382cc635930591ddbbbeb681aa88b9b3 2 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7a960b1744c9324f382cc635930591ddbbbeb681aa88b9b3 2 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7a960b1744c9324f382cc635930591ddbbbeb681aa88b9b3 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZqD 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZqD 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ZqD 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:00.208 
15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5afe5d35fb4c6fe49c97ba4c1a364151be350771540e5bbe 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ybI 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5afe5d35fb4c6fe49c97ba4c1a364151be350771540e5bbe 2 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5afe5d35fb4c6fe49c97ba4c1a364151be350771540e5bbe 2 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5afe5d35fb4c6fe49c97ba4c1a364151be350771540e5bbe 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:00.208 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ybI 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ybI 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ybI 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=15f4cc0007feafd906f7b2025074444b 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.xbS 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 15f4cc0007feafd906f7b2025074444b 1 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 15f4cc0007feafd906f7b2025074444b 1 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=15f4cc0007feafd906f7b2025074444b 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:00.466 15:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.xbS 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.xbS 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.xbS 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3fc3812eb49e5fa32be40ccb0a4cbc07a17352b7abbc822a9c6f5881f5ec769c 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cmT 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3fc3812eb49e5fa32be40ccb0a4cbc07a17352b7abbc822a9c6f5881f5ec769c 3 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3fc3812eb49e5fa32be40ccb0a4cbc07a17352b7abbc822a9c6f5881f5ec769c 3 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3fc3812eb49e5fa32be40ccb0a4cbc07a17352b7abbc822a9c6f5881f5ec769c 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cmT 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cmT 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.cmT 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 78264 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78264 ']' 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
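The gen_dhchap_key traces above boil down to a short recipe: draw len/2 random bytes as a hex string with xxd, wrap that string in a DHHC-1:<digest-id>: envelope (the wrapping is done by an inline python helper in nvmf/common.sh whose body is not echoed in this trace), store the result in a mktemp file and restrict it to mode 0600. A minimal sketch of those visible steps, with the envelope encoding left as a placeholder because the helper is not shown here:

    # Sketch only: mirrors the shell steps traced above. The real script
    # builds the DHHC-1 payload with a python helper that is not echoed in
    # this log, so a placeholder envelope is written here instead.
    gen_dhchap_key() {
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
        file=$(mktemp -t "spdk.key-${digest}.XXX")
        echo "DHHC-1:<digest-id>:${key}:" > "$file"        # placeholder encoding
        chmod 0600 "$file"
        echo "$file"
    }

    keyfile=$(gen_dhchap_key sha384 48)    # prints the generated key path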
00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:00.466 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.724 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.724 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:00.724 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 78308 /var/tmp/host.sock 00:12:00.724 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78308 ']' 00:12:00.724 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:00.724 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:00.724 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:00.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:00.724 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:00.724 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Uz0 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Uz0 00:12:00.981 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Uz0 00:12:01.240 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Uv5 ]] 00:12:01.240 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Uv5 00:12:01.240 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.240 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.498 15:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.498 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Uv5 00:12:01.498 15:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.Uv5 00:12:01.801 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:01.801 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7HD 00:12:01.801 15:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.801 15:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.801 15:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.801 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.7HD 00:12:01.801 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.7HD 00:12:01.801 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ZqD ]] 00:12:01.801 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZqD 00:12:01.801 15:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.801 15:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.060 15:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.060 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZqD 00:12:02.060 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZqD 00:12:02.060 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:02.060 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ybI 00:12:02.060 15:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.060 15:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.060 15:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.060 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ybI 00:12:02.060 15:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ybI 00:12:02.318 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.xbS ]] 00:12:02.318 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xbS 00:12:02.318 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.318 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.318 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.318 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xbS 00:12:02.318 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xbS 00:12:02.576 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:02.576 
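Each key file is registered twice through keyring_file_add_key: once with the nvmf target over its default RPC socket (rpc_cmd, /var/tmp/spdk.sock per the waitforlisten above) and once with the host-side application over /var/tmp/host.sock (hostrpc). A condensed sketch of that double registration for key1/ckey1, reusing the exact paths from this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target-side keyring (default socket /var/tmp/spdk.sock)
    "$RPC" keyring_file_add_key key1  /tmp/spdk.key-sha256.7HD
    "$RPC" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZqD
    # host-side keyring (/var/tmp/host.sock)
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key key1  /tmp/spdk.key-sha256.7HD
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZqD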
15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cmT 00:12:02.576 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.576 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.576 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.576 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.cmT 00:12:02.576 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.cmT 00:12:02.833 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:12:02.833 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:02.833 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:02.833 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.833 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:02.833 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.090 15:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.660 00:12:03.660 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.660 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:03.660 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.921 { 00:12:03.921 "auth": { 00:12:03.921 "dhgroup": "null", 00:12:03.921 "digest": "sha256", 00:12:03.921 "state": "completed" 00:12:03.921 }, 00:12:03.921 "cntlid": 1, 00:12:03.921 "listen_address": { 00:12:03.921 "adrfam": "IPv4", 00:12:03.921 "traddr": "10.0.0.2", 00:12:03.921 "trsvcid": "4420", 00:12:03.921 "trtype": "TCP" 00:12:03.921 }, 00:12:03.921 "peer_address": { 00:12:03.921 "adrfam": "IPv4", 00:12:03.921 "traddr": "10.0.0.1", 00:12:03.921 "trsvcid": "57410", 00:12:03.921 "trtype": "TCP" 00:12:03.921 }, 00:12:03.921 "qid": 0, 00:12:03.921 "state": "enabled", 00:12:03.921 "thread": "nvmf_tgt_poll_group_000" 00:12:03.921 } 00:12:03.921 ]' 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.921 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.487 15:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:12:08.684 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.684 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:08.684 15:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.684 15:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.684 15:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.684 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:08.684 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:08.684 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.942 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.516 00:12:09.516 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.516 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.516 15:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.516 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.516 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.516 15:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.516 15:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.516 15:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.516 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.516 { 00:12:09.516 "auth": { 00:12:09.516 "dhgroup": "null", 00:12:09.516 "digest": "sha256", 00:12:09.516 "state": "completed" 00:12:09.516 }, 00:12:09.516 "cntlid": 3, 00:12:09.516 "listen_address": { 00:12:09.516 "adrfam": "IPv4", 00:12:09.516 "traddr": "10.0.0.2", 00:12:09.516 "trsvcid": "4420", 00:12:09.516 "trtype": "TCP" 00:12:09.516 }, 00:12:09.516 "peer_address": { 
00:12:09.516 "adrfam": "IPv4", 00:12:09.516 "traddr": "10.0.0.1", 00:12:09.516 "trsvcid": "36564", 00:12:09.516 "trtype": "TCP" 00:12:09.516 }, 00:12:09.516 "qid": 0, 00:12:09.516 "state": "enabled", 00:12:09.516 "thread": "nvmf_tgt_poll_group_000" 00:12:09.516 } 00:12:09.516 ]' 00:12:09.516 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.772 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:09.772 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.772 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:09.772 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.772 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.772 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.772 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.029 15:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:12:10.961 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.961 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.962 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.220 00:12:11.220 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.220 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.220 15:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.478 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.478 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.478 15:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.478 15:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.478 15:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.478 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.478 { 00:12:11.478 "auth": { 00:12:11.478 "dhgroup": "null", 00:12:11.478 "digest": "sha256", 00:12:11.478 "state": "completed" 00:12:11.478 }, 00:12:11.478 "cntlid": 5, 00:12:11.478 "listen_address": { 00:12:11.478 "adrfam": "IPv4", 00:12:11.478 "traddr": "10.0.0.2", 00:12:11.478 "trsvcid": "4420", 00:12:11.478 "trtype": "TCP" 00:12:11.478 }, 00:12:11.478 "peer_address": { 00:12:11.478 "adrfam": "IPv4", 00:12:11.478 "traddr": "10.0.0.1", 00:12:11.478 "trsvcid": "36582", 00:12:11.478 "trtype": "TCP" 00:12:11.478 }, 00:12:11.478 "qid": 0, 00:12:11.478 "state": "enabled", 00:12:11.478 "thread": "nvmf_tgt_poll_group_000" 00:12:11.478 } 00:12:11.478 ]' 00:12:11.478 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.736 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:11.736 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.736 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:11.736 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.736 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.736 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.736 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.995 15:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:12.994 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:13.251 00:12:13.251 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.251 15:58:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.251 15:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.510 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.510 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.510 15:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.510 15:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.510 15:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.510 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.510 { 00:12:13.510 "auth": { 00:12:13.510 "dhgroup": "null", 00:12:13.510 "digest": "sha256", 00:12:13.510 "state": "completed" 00:12:13.510 }, 00:12:13.510 "cntlid": 7, 00:12:13.510 "listen_address": { 00:12:13.510 "adrfam": "IPv4", 00:12:13.510 "traddr": "10.0.0.2", 00:12:13.510 "trsvcid": "4420", 00:12:13.510 "trtype": "TCP" 00:12:13.510 }, 00:12:13.510 "peer_address": { 00:12:13.510 "adrfam": "IPv4", 00:12:13.510 "traddr": "10.0.0.1", 00:12:13.510 "trsvcid": "36598", 00:12:13.510 "trtype": "TCP" 00:12:13.510 }, 00:12:13.510 "qid": 0, 00:12:13.510 "state": "enabled", 00:12:13.510 "thread": "nvmf_tgt_poll_group_000" 00:12:13.510 } 00:12:13.510 ]' 00:12:13.510 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.768 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:13.768 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.768 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:13.768 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.768 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.768 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.768 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.026 15:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:12:14.959 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.959 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:14.959 15:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.959 15:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.959 15:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.959 15:58:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:14.959 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.959 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:14.959 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.217 15:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.475 00:12:15.475 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.475 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.475 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.733 { 00:12:15.733 "auth": { 00:12:15.733 "dhgroup": "ffdhe2048", 00:12:15.733 "digest": "sha256", 00:12:15.733 "state": "completed" 00:12:15.733 }, 00:12:15.733 "cntlid": 9, 00:12:15.733 "listen_address": { 00:12:15.733 
"adrfam": "IPv4", 00:12:15.733 "traddr": "10.0.0.2", 00:12:15.733 "trsvcid": "4420", 00:12:15.733 "trtype": "TCP" 00:12:15.733 }, 00:12:15.733 "peer_address": { 00:12:15.733 "adrfam": "IPv4", 00:12:15.733 "traddr": "10.0.0.1", 00:12:15.733 "trsvcid": "48258", 00:12:15.733 "trtype": "TCP" 00:12:15.733 }, 00:12:15.733 "qid": 0, 00:12:15.733 "state": "enabled", 00:12:15.733 "thread": "nvmf_tgt_poll_group_000" 00:12:15.733 } 00:12:15.733 ]' 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.733 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.300 15:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:12:16.866 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.866 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:16.866 15:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.866 15:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.866 15:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.866 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.866 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:16.866 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.123 15:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.390 00:12:17.390 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.390 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.390 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.651 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.652 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.652 15:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.652 15:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.652 15:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.652 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.652 { 00:12:17.652 "auth": { 00:12:17.652 "dhgroup": "ffdhe2048", 00:12:17.652 "digest": "sha256", 00:12:17.652 "state": "completed" 00:12:17.652 }, 00:12:17.652 "cntlid": 11, 00:12:17.652 "listen_address": { 00:12:17.652 "adrfam": "IPv4", 00:12:17.652 "traddr": "10.0.0.2", 00:12:17.652 "trsvcid": "4420", 00:12:17.652 "trtype": "TCP" 00:12:17.652 }, 00:12:17.652 "peer_address": { 00:12:17.652 "adrfam": "IPv4", 00:12:17.652 "traddr": "10.0.0.1", 00:12:17.652 "trsvcid": "48274", 00:12:17.652 "trtype": "TCP" 00:12:17.652 }, 00:12:17.652 "qid": 0, 00:12:17.652 "state": "enabled", 00:12:17.652 "thread": "nvmf_tgt_poll_group_000" 00:12:17.652 } 00:12:17.652 ]' 00:12:17.652 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.910 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:17.910 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.910 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:17.910 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.910 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
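After every bdev_nvme_attach_controller the script reads the qpair list back from the target and asserts on its auth block: the digest, dhgroup and state fields must equal the values just configured. The same check can be reproduced by hand with the jq filters seen in the traces:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]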
00:12:17.910 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.910 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.170 15:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.105 15:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.670 00:12:19.670 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.670 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.670 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.928 { 00:12:19.928 "auth": { 00:12:19.928 "dhgroup": "ffdhe2048", 00:12:19.928 "digest": "sha256", 00:12:19.928 "state": "completed" 00:12:19.928 }, 00:12:19.928 "cntlid": 13, 00:12:19.928 "listen_address": { 00:12:19.928 "adrfam": "IPv4", 00:12:19.928 "traddr": "10.0.0.2", 00:12:19.928 "trsvcid": "4420", 00:12:19.928 "trtype": "TCP" 00:12:19.928 }, 00:12:19.928 "peer_address": { 00:12:19.928 "adrfam": "IPv4", 00:12:19.928 "traddr": "10.0.0.1", 00:12:19.928 "trsvcid": "48298", 00:12:19.928 "trtype": "TCP" 00:12:19.928 }, 00:12:19.928 "qid": 0, 00:12:19.928 "state": "enabled", 00:12:19.928 "thread": "nvmf_tgt_poll_group_000" 00:12:19.928 } 00:12:19.928 ]' 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.928 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.186 15:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:12:21.120 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.120 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 
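The secrets are also exercised from the kernel initiator: nvme connect is given the host secret via --dhchap-secret and the controller secret (for bidirectional authentication) via --dhchap-ctrl-secret, both already in DHHC-1 form, and the controller is removed again with nvme disconnect. A trimmed version of the invocation traced above, with the secret bodies elided rather than repeated:

    # secret bodies elided; use the DHHC-1:..: strings printed earlier in the trace
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d \
        --hostid a185c444-aaeb-4d13-aa60-df1b0266600d \
        --dhchap-secret 'DHHC-1:01:...' \
        --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0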
00:12:21.120 15:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.120 15:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.120 15:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.120 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.120 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:21.120 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:21.377 15:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:21.635 00:12:21.635 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.635 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.635 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.894 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.894 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.894 15:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.894 15:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.894 15:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.894 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.894 { 00:12:21.894 "auth": { 00:12:21.894 "dhgroup": 
"ffdhe2048", 00:12:21.894 "digest": "sha256", 00:12:21.894 "state": "completed" 00:12:21.894 }, 00:12:21.894 "cntlid": 15, 00:12:21.894 "listen_address": { 00:12:21.894 "adrfam": "IPv4", 00:12:21.894 "traddr": "10.0.0.2", 00:12:21.894 "trsvcid": "4420", 00:12:21.894 "trtype": "TCP" 00:12:21.894 }, 00:12:21.894 "peer_address": { 00:12:21.894 "adrfam": "IPv4", 00:12:21.894 "traddr": "10.0.0.1", 00:12:21.894 "trsvcid": "48318", 00:12:21.894 "trtype": "TCP" 00:12:21.894 }, 00:12:21.894 "qid": 0, 00:12:21.894 "state": "enabled", 00:12:21.894 "thread": "nvmf_tgt_poll_group_000" 00:12:21.894 } 00:12:21.894 ]' 00:12:21.894 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.894 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:21.894 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.894 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:21.894 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.151 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.151 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.151 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.409 15:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:12:22.976 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.976 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:22.976 15:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.976 15:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.976 15:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.976 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:22.976 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.976 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:22.976 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:23.233 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:12:23.233 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.233 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:23.234 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# dhgroup=ffdhe3072 00:12:23.234 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:23.234 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.234 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.234 15:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.234 15:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.234 15:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.234 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.234 15:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.493 00:12:23.493 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.493 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.493 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.751 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.751 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.751 15:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.751 15:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.751 15:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.751 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.751 { 00:12:23.751 "auth": { 00:12:23.751 "dhgroup": "ffdhe3072", 00:12:23.751 "digest": "sha256", 00:12:23.751 "state": "completed" 00:12:23.751 }, 00:12:23.751 "cntlid": 17, 00:12:23.751 "listen_address": { 00:12:23.751 "adrfam": "IPv4", 00:12:23.751 "traddr": "10.0.0.2", 00:12:23.751 "trsvcid": "4420", 00:12:23.751 "trtype": "TCP" 00:12:23.751 }, 00:12:23.751 "peer_address": { 00:12:23.751 "adrfam": "IPv4", 00:12:23.751 "traddr": "10.0.0.1", 00:12:23.751 "trsvcid": "48348", 00:12:23.751 "trtype": "TCP" 00:12:23.751 }, 00:12:23.751 "qid": 0, 00:12:23.751 "state": "enabled", 00:12:23.751 "thread": "nvmf_tgt_poll_group_000" 00:12:23.751 } 00:12:23.751 ]' 00:12:23.751 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.751 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:23.751 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.009 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:24.009 15:58:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.009 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.009 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.009 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.267 15:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:12:24.833 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.833 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:24.833 15:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.833 15:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.833 15:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.833 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.833 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:24.833 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:25.090 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:12:25.091 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.091 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:25.091 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:25.091 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:25.091 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.091 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.091 15:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.091 15:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.091 15:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.091 15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.091 
15:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.349 00:12:25.349 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.349 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.349 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.606 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.606 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.606 15:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.606 15:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.606 15:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.606 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.606 { 00:12:25.606 "auth": { 00:12:25.606 "dhgroup": "ffdhe3072", 00:12:25.606 "digest": "sha256", 00:12:25.606 "state": "completed" 00:12:25.606 }, 00:12:25.606 "cntlid": 19, 00:12:25.606 "listen_address": { 00:12:25.606 "adrfam": "IPv4", 00:12:25.606 "traddr": "10.0.0.2", 00:12:25.606 "trsvcid": "4420", 00:12:25.606 "trtype": "TCP" 00:12:25.606 }, 00:12:25.606 "peer_address": { 00:12:25.606 "adrfam": "IPv4", 00:12:25.606 "traddr": "10.0.0.1", 00:12:25.606 "trsvcid": "47796", 00:12:25.606 "trtype": "TCP" 00:12:25.606 }, 00:12:25.606 "qid": 0, 00:12:25.606 "state": "enabled", 00:12:25.606 "thread": "nvmf_tgt_poll_group_000" 00:12:25.606 } 00:12:25.606 ]' 00:12:25.606 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.863 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:25.863 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.863 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:25.863 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.863 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.863 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.863 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.120 15:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.048 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.049 15:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.305 00:12:27.305 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.305 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.305 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.870 { 00:12:27.870 "auth": { 00:12:27.870 "dhgroup": "ffdhe3072", 00:12:27.870 "digest": "sha256", 00:12:27.870 "state": "completed" 00:12:27.870 }, 00:12:27.870 "cntlid": 21, 00:12:27.870 "listen_address": { 00:12:27.870 "adrfam": "IPv4", 00:12:27.870 "traddr": "10.0.0.2", 00:12:27.870 "trsvcid": "4420", 00:12:27.870 "trtype": "TCP" 00:12:27.870 }, 00:12:27.870 "peer_address": { 00:12:27.870 "adrfam": "IPv4", 00:12:27.870 "traddr": "10.0.0.1", 00:12:27.870 "trsvcid": "47830", 00:12:27.870 "trtype": "TCP" 00:12:27.870 }, 00:12:27.870 "qid": 0, 00:12:27.870 "state": "enabled", 00:12:27.870 "thread": "nvmf_tgt_poll_group_000" 00:12:27.870 } 00:12:27.870 ]' 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.870 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.167 15:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:12:28.733 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.733 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:28.733 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.733 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.733 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.733 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.733 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:28.733 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:28.991 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:12:28.991 15:58:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.991 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:28.991 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:28.991 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:28.991 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.991 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:12:28.991 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.991 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.991 15:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.991 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:28.991 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:29.250 00:12:29.509 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.509 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.509 15:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.509 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.509 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.509 15:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.509 15:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.767 15:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.767 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.767 { 00:12:29.767 "auth": { 00:12:29.767 "dhgroup": "ffdhe3072", 00:12:29.767 "digest": "sha256", 00:12:29.767 "state": "completed" 00:12:29.767 }, 00:12:29.767 "cntlid": 23, 00:12:29.767 "listen_address": { 00:12:29.767 "adrfam": "IPv4", 00:12:29.767 "traddr": "10.0.0.2", 00:12:29.767 "trsvcid": "4420", 00:12:29.767 "trtype": "TCP" 00:12:29.767 }, 00:12:29.767 "peer_address": { 00:12:29.767 "adrfam": "IPv4", 00:12:29.767 "traddr": "10.0.0.1", 00:12:29.767 "trsvcid": "47848", 00:12:29.767 "trtype": "TCP" 00:12:29.767 }, 00:12:29.767 "qid": 0, 00:12:29.767 "state": "enabled", 00:12:29.767 "thread": "nvmf_tgt_poll_group_000" 00:12:29.767 } 00:12:29.767 ]' 00:12:29.767 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.767 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:29.767 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:12:29.767 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:29.767 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.767 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.767 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.767 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.031 15:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:12:30.964 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.964 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:30.964 15:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.964 15:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.964 15:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.964 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:30.964 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.964 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:30.964 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.223 15:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.481 00:12:31.481 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.481 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.481 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.739 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.739 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.739 15:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.739 15:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.739 15:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.739 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.739 { 00:12:31.739 "auth": { 00:12:31.739 "dhgroup": "ffdhe4096", 00:12:31.739 "digest": "sha256", 00:12:31.739 "state": "completed" 00:12:31.739 }, 00:12:31.739 "cntlid": 25, 00:12:31.739 "listen_address": { 00:12:31.739 "adrfam": "IPv4", 00:12:31.739 "traddr": "10.0.0.2", 00:12:31.739 "trsvcid": "4420", 00:12:31.739 "trtype": "TCP" 00:12:31.739 }, 00:12:31.739 "peer_address": { 00:12:31.739 "adrfam": "IPv4", 00:12:31.739 "traddr": "10.0.0.1", 00:12:31.739 "trsvcid": "47876", 00:12:31.739 "trtype": "TCP" 00:12:31.739 }, 00:12:31.739 "qid": 0, 00:12:31.739 "state": "enabled", 00:12:31.739 "thread": "nvmf_tgt_poll_group_000" 00:12:31.739 } 00:12:31.739 ]' 00:12:31.739 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.997 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:31.997 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.997 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:31.997 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.997 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.997 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.997 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.256 15:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret 
DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.191 15:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.758 00:12:33.758 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.758 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.758 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.016 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:12:34.016 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.016 15:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.016 15:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.016 15:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.016 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.016 { 00:12:34.016 "auth": { 00:12:34.016 "dhgroup": "ffdhe4096", 00:12:34.016 "digest": "sha256", 00:12:34.016 "state": "completed" 00:12:34.016 }, 00:12:34.016 "cntlid": 27, 00:12:34.016 "listen_address": { 00:12:34.016 "adrfam": "IPv4", 00:12:34.016 "traddr": "10.0.0.2", 00:12:34.016 "trsvcid": "4420", 00:12:34.016 "trtype": "TCP" 00:12:34.016 }, 00:12:34.016 "peer_address": { 00:12:34.016 "adrfam": "IPv4", 00:12:34.016 "traddr": "10.0.0.1", 00:12:34.016 "trsvcid": "47904", 00:12:34.016 "trtype": "TCP" 00:12:34.016 }, 00:12:34.016 "qid": 0, 00:12:34.016 "state": "enabled", 00:12:34.016 "thread": "nvmf_tgt_poll_group_000" 00:12:34.016 } 00:12:34.016 ]' 00:12:34.016 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.016 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:34.016 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.016 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:34.016 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.274 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.274 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.274 15:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.532 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:12:35.099 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.099 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:35.099 15:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.099 15:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.099 15:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.099 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.099 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:35.099 15:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.358 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.925 00:12:35.925 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.925 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.925 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.184 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.184 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.184 15:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.184 15:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.184 15:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.184 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.184 { 00:12:36.184 "auth": { 00:12:36.184 "dhgroup": "ffdhe4096", 00:12:36.184 "digest": "sha256", 00:12:36.184 "state": "completed" 00:12:36.184 }, 00:12:36.184 "cntlid": 29, 00:12:36.184 "listen_address": { 00:12:36.184 "adrfam": "IPv4", 00:12:36.184 "traddr": "10.0.0.2", 00:12:36.184 "trsvcid": "4420", 00:12:36.184 "trtype": "TCP" 00:12:36.184 }, 00:12:36.184 "peer_address": { 00:12:36.184 "adrfam": "IPv4", 00:12:36.184 "traddr": "10.0.0.1", 00:12:36.184 "trsvcid": "40136", 00:12:36.184 "trtype": "TCP" 00:12:36.184 }, 00:12:36.184 "qid": 0, 00:12:36.184 "state": "enabled", 00:12:36.184 "thread": 
"nvmf_tgt_poll_group_000" 00:12:36.184 } 00:12:36.184 ]' 00:12:36.184 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.184 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:36.184 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.184 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:36.184 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.442 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.442 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.442 15:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.701 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:12:37.266 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.266 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:37.266 15:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.266 15:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.266 15:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.266 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.266 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:37.266 15:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:37.524 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:38.090 00:12:38.090 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.090 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.090 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.090 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.090 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.090 15:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.090 15:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.349 15:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.349 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.349 { 00:12:38.349 "auth": { 00:12:38.349 "dhgroup": "ffdhe4096", 00:12:38.349 "digest": "sha256", 00:12:38.349 "state": "completed" 00:12:38.349 }, 00:12:38.349 "cntlid": 31, 00:12:38.349 "listen_address": { 00:12:38.349 "adrfam": "IPv4", 00:12:38.349 "traddr": "10.0.0.2", 00:12:38.349 "trsvcid": "4420", 00:12:38.349 "trtype": "TCP" 00:12:38.349 }, 00:12:38.349 "peer_address": { 00:12:38.349 "adrfam": "IPv4", 00:12:38.349 "traddr": "10.0.0.1", 00:12:38.349 "trsvcid": "40160", 00:12:38.349 "trtype": "TCP" 00:12:38.349 }, 00:12:38.349 "qid": 0, 00:12:38.349 "state": "enabled", 00:12:38.349 "thread": "nvmf_tgt_poll_group_000" 00:12:38.349 } 00:12:38.349 ]' 00:12:38.349 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.349 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:38.349 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.349 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:38.349 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.349 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.349 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.349 15:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.607 15:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid 
a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:12:39.541 15:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.541 15:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:39.541 15:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.541 15:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.541 15:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.541 15:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:39.541 15:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.541 15:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:39.542 15:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:39.542 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:12:39.542 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.542 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:39.542 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:39.542 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:39.542 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.542 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.542 15:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.542 15:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.800 15:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.800 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.800 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.058 00:12:40.058 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.058 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.058 15:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.316 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.316 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.316 15:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.316 15:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.316 15:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.316 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.316 { 00:12:40.316 "auth": { 00:12:40.316 "dhgroup": "ffdhe6144", 00:12:40.316 "digest": "sha256", 00:12:40.316 "state": "completed" 00:12:40.316 }, 00:12:40.316 "cntlid": 33, 00:12:40.316 "listen_address": { 00:12:40.316 "adrfam": "IPv4", 00:12:40.316 "traddr": "10.0.0.2", 00:12:40.316 "trsvcid": "4420", 00:12:40.316 "trtype": "TCP" 00:12:40.316 }, 00:12:40.316 "peer_address": { 00:12:40.316 "adrfam": "IPv4", 00:12:40.316 "traddr": "10.0.0.1", 00:12:40.316 "trsvcid": "40178", 00:12:40.316 "trtype": "TCP" 00:12:40.316 }, 00:12:40.316 "qid": 0, 00:12:40.316 "state": "enabled", 00:12:40.316 "thread": "nvmf_tgt_poll_group_000" 00:12:40.316 } 00:12:40.316 ]' 00:12:40.316 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.573 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:40.573 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.573 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:40.573 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.573 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.573 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.573 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.831 15:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:12:41.763 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.763 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:41.763 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.763 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.763 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.763 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:41.763 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:41.763 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.035 15:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.601 00:12:42.601 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.601 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.601 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.859 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.859 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.859 15:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.859 15:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.859 15:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.859 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.859 { 00:12:42.859 "auth": { 00:12:42.859 "dhgroup": "ffdhe6144", 00:12:42.859 "digest": "sha256", 00:12:42.859 "state": "completed" 00:12:42.859 }, 00:12:42.859 "cntlid": 35, 00:12:42.859 "listen_address": { 00:12:42.859 "adrfam": "IPv4", 00:12:42.859 "traddr": "10.0.0.2", 00:12:42.859 "trsvcid": "4420", 00:12:42.859 "trtype": "TCP" 00:12:42.859 }, 00:12:42.859 
"peer_address": { 00:12:42.859 "adrfam": "IPv4", 00:12:42.859 "traddr": "10.0.0.1", 00:12:42.859 "trsvcid": "40210", 00:12:42.859 "trtype": "TCP" 00:12:42.859 }, 00:12:42.859 "qid": 0, 00:12:42.859 "state": "enabled", 00:12:42.859 "thread": "nvmf_tgt_poll_group_000" 00:12:42.859 } 00:12:42.859 ]' 00:12:42.859 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.859 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.859 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.859 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:42.859 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.116 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.116 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.116 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.375 15:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:12:43.941 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.941 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:43.941 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.941 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.941 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.941 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.941 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:43.941 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:44.199 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:12:44.200 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.200 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:44.200 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:44.200 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:44.200 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.200 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.200 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.200 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.200 15:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.200 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.200 15:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.766 00:12:44.767 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.767 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.767 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.025 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.025 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.025 15:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.025 15:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.025 15:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.025 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.025 { 00:12:45.025 "auth": { 00:12:45.025 "dhgroup": "ffdhe6144", 00:12:45.025 "digest": "sha256", 00:12:45.025 "state": "completed" 00:12:45.025 }, 00:12:45.025 "cntlid": 37, 00:12:45.025 "listen_address": { 00:12:45.025 "adrfam": "IPv4", 00:12:45.025 "traddr": "10.0.0.2", 00:12:45.025 "trsvcid": "4420", 00:12:45.025 "trtype": "TCP" 00:12:45.025 }, 00:12:45.025 "peer_address": { 00:12:45.025 "adrfam": "IPv4", 00:12:45.025 "traddr": "10.0.0.1", 00:12:45.025 "trsvcid": "40226", 00:12:45.025 "trtype": "TCP" 00:12:45.025 }, 00:12:45.025 "qid": 0, 00:12:45.025 "state": "enabled", 00:12:45.025 "thread": "nvmf_tgt_poll_group_000" 00:12:45.025 } 00:12:45.025 ]' 00:12:45.025 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.025 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:45.025 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.283 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:45.283 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.283 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.283 15:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.283 15:58:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.541 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:12:46.107 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.107 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:46.107 15:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.107 15:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.107 15:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.107 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.107 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:46.107 15:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.365 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.930 00:12:46.930 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:12:46.930 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:46.930 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.186 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.186 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.186 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.186 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.186 15:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.186 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.186 { 00:12:47.186 "auth": { 00:12:47.186 "dhgroup": "ffdhe6144", 00:12:47.186 "digest": "sha256", 00:12:47.186 "state": "completed" 00:12:47.186 }, 00:12:47.186 "cntlid": 39, 00:12:47.186 "listen_address": { 00:12:47.186 "adrfam": "IPv4", 00:12:47.186 "traddr": "10.0.0.2", 00:12:47.186 "trsvcid": "4420", 00:12:47.186 "trtype": "TCP" 00:12:47.186 }, 00:12:47.186 "peer_address": { 00:12:47.186 "adrfam": "IPv4", 00:12:47.186 "traddr": "10.0.0.1", 00:12:47.186 "trsvcid": "42426", 00:12:47.186 "trtype": "TCP" 00:12:47.186 }, 00:12:47.186 "qid": 0, 00:12:47.186 "state": "enabled", 00:12:47.186 "thread": "nvmf_tgt_poll_group_000" 00:12:47.186 } 00:12:47.186 ]' 00:12:47.186 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.442 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:47.442 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.442 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:47.442 15:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.442 15:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.442 15:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.443 15:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.699 15:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:12:48.630 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
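A condensed sketch of the single authentication pass that target/auth.sh repeats in the trace above (and below) for every digest/dhgroup/key combination. All commands, addresses, and NQNs are taken from the surrounding log itself; the only assumptions are that the target application answers on its default RPC socket (the log's rpc_cmd wrapper hides the socket path) and that key1/ckey1 refer to keyring entries the script has already loaded. This is an illustrative recap, not part of the captured trace.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # 1. Constrain the host-side initiator to the digest/dhgroup pair under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # 2. Register the host on the target subsystem with the key pair under test
  #    (target-side RPC; default socket assumed here).
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # 3. Attach a controller from the host side; this drives the DH-HMAC-CHAP exchange.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # 4. Confirm the controller exists and that the target reports the qpair as authenticated.
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'                                             # expect "completed"

  # 5. Detach the RPC-created controller, re-check the same keys with the kernel initiator
  #    (nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 with the matching
  #    --dhchap-secret/--dhchap-ctrl-secret DHHC-1 strings, then nvme disconnect), and
  #    finally drop the host entry so the next digest/dhgroup/key iteration starts clean.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d

The trace that follows is the same cycle continuing with the remaining key IDs, then with the ffdhe8192, null, and ffdhe2048 DH groups and the sha384 digest.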
00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.631 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.561 00:12:49.561 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.561 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.561 15:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.561 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.561 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.561 15:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.561 15:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.561 15:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.561 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.561 { 00:12:49.561 "auth": { 00:12:49.561 "dhgroup": "ffdhe8192", 00:12:49.561 "digest": "sha256", 00:12:49.561 "state": "completed" 00:12:49.561 }, 00:12:49.561 "cntlid": 41, 
00:12:49.561 "listen_address": { 00:12:49.561 "adrfam": "IPv4", 00:12:49.561 "traddr": "10.0.0.2", 00:12:49.561 "trsvcid": "4420", 00:12:49.561 "trtype": "TCP" 00:12:49.561 }, 00:12:49.561 "peer_address": { 00:12:49.561 "adrfam": "IPv4", 00:12:49.561 "traddr": "10.0.0.1", 00:12:49.561 "trsvcid": "42438", 00:12:49.561 "trtype": "TCP" 00:12:49.561 }, 00:12:49.561 "qid": 0, 00:12:49.561 "state": "enabled", 00:12:49.561 "thread": "nvmf_tgt_poll_group_000" 00:12:49.561 } 00:12:49.561 ]' 00:12:49.561 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.817 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:49.817 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.817 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:49.817 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.817 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.817 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.817 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.074 15:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:51.007 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:51.007 
15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.008 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.008 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.008 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.008 15:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.008 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.008 15:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.942 00:12:51.942 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:51.942 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.942 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.942 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.942 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.942 15:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.942 15:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 15:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.942 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.942 { 00:12:51.942 "auth": { 00:12:51.942 "dhgroup": "ffdhe8192", 00:12:51.942 "digest": "sha256", 00:12:51.942 "state": "completed" 00:12:51.942 }, 00:12:51.942 "cntlid": 43, 00:12:51.942 "listen_address": { 00:12:51.942 "adrfam": "IPv4", 00:12:51.942 "traddr": "10.0.0.2", 00:12:51.942 "trsvcid": "4420", 00:12:51.942 "trtype": "TCP" 00:12:51.942 }, 00:12:51.942 "peer_address": { 00:12:51.942 "adrfam": "IPv4", 00:12:51.942 "traddr": "10.0.0.1", 00:12:51.942 "trsvcid": "42462", 00:12:51.942 "trtype": "TCP" 00:12:51.942 }, 00:12:51.942 "qid": 0, 00:12:51.942 "state": "enabled", 00:12:51.942 "thread": "nvmf_tgt_poll_group_000" 00:12:51.942 } 00:12:51.942 ]' 00:12:51.942 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.199 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:52.199 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.199 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:52.199 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.200 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.200 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.200 15:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.458 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:12:53.025 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.025 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:53.283 15:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.283 15:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.283 15:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.283 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:53.283 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:53.284 15:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:53.284 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:12:53.284 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.284 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:53.284 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:53.284 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:53.284 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.284 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.284 15:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.284 15:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.541 15:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.541 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.542 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.105 00:12:54.106 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:54.106 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.106 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:54.362 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.362 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.362 15:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.362 15:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.363 15:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.363 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:54.363 { 00:12:54.363 "auth": { 00:12:54.363 "dhgroup": "ffdhe8192", 00:12:54.363 "digest": "sha256", 00:12:54.363 "state": "completed" 00:12:54.363 }, 00:12:54.363 "cntlid": 45, 00:12:54.363 "listen_address": { 00:12:54.363 "adrfam": "IPv4", 00:12:54.363 "traddr": "10.0.0.2", 00:12:54.363 "trsvcid": "4420", 00:12:54.363 "trtype": "TCP" 00:12:54.363 }, 00:12:54.363 "peer_address": { 00:12:54.363 "adrfam": "IPv4", 00:12:54.363 "traddr": "10.0.0.1", 00:12:54.363 "trsvcid": "42494", 00:12:54.363 "trtype": "TCP" 00:12:54.363 }, 00:12:54.363 "qid": 0, 00:12:54.363 "state": "enabled", 00:12:54.363 "thread": "nvmf_tgt_poll_group_000" 00:12:54.363 } 00:12:54.363 ]' 00:12:54.363 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.363 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:54.363 15:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:54.363 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:54.363 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:54.363 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.363 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.363 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.620 15:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:12:55.553 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.553 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:55.553 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.553 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.553 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.553 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.553 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:55.553 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:55.811 15:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.377 00:12:56.377 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.377 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.377 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.634 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.634 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.635 15:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.635 15:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.635 15:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.635 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:56.635 { 00:12:56.635 "auth": { 00:12:56.635 "dhgroup": "ffdhe8192", 00:12:56.635 "digest": "sha256", 00:12:56.635 "state": "completed" 00:12:56.635 }, 00:12:56.635 "cntlid": 47, 00:12:56.635 "listen_address": { 00:12:56.635 "adrfam": "IPv4", 00:12:56.635 "traddr": "10.0.0.2", 00:12:56.635 "trsvcid": "4420", 00:12:56.635 "trtype": "TCP" 00:12:56.635 }, 00:12:56.635 "peer_address": { 00:12:56.635 "adrfam": "IPv4", 00:12:56.635 "traddr": "10.0.0.1", 00:12:56.635 "trsvcid": "42874", 00:12:56.635 "trtype": "TCP" 00:12:56.635 }, 00:12:56.635 "qid": 0, 00:12:56.635 "state": "enabled", 00:12:56.635 "thread": "nvmf_tgt_poll_group_000" 00:12:56.635 } 00:12:56.635 ]' 00:12:56.635 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.893 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:56.893 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.893 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:56.893 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.893 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.893 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.893 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.151 15:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.085 15:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.651 00:12:58.651 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.651 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.651 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.910 { 00:12:58.910 "auth": { 00:12:58.910 "dhgroup": "null", 00:12:58.910 "digest": "sha384", 00:12:58.910 "state": "completed" 00:12:58.910 }, 00:12:58.910 "cntlid": 49, 00:12:58.910 "listen_address": { 00:12:58.910 "adrfam": "IPv4", 00:12:58.910 "traddr": "10.0.0.2", 00:12:58.910 "trsvcid": "4420", 00:12:58.910 "trtype": "TCP" 00:12:58.910 }, 00:12:58.910 "peer_address": { 00:12:58.910 "adrfam": "IPv4", 00:12:58.910 "traddr": "10.0.0.1", 00:12:58.910 "trsvcid": "42900", 00:12:58.910 "trtype": "TCP" 00:12:58.910 }, 00:12:58.910 "qid": 0, 00:12:58.910 "state": "enabled", 00:12:58.910 "thread": "nvmf_tgt_poll_group_000" 00:12:58.910 } 00:12:58.910 ]' 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.910 15:58:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.910 15:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.477 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:13:00.411 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.411 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:00.411 15:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.411 15:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.411 15:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.411 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.411 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:00.411 15:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.411 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.977 00:13:00.977 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.977 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.977 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.544 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.544 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.544 15:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.544 15:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.544 15:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.544 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.544 { 00:13:01.544 "auth": { 00:13:01.544 "dhgroup": "null", 00:13:01.544 "digest": "sha384", 00:13:01.544 "state": "completed" 00:13:01.544 }, 00:13:01.544 "cntlid": 51, 00:13:01.544 "listen_address": { 00:13:01.544 "adrfam": "IPv4", 00:13:01.544 "traddr": "10.0.0.2", 00:13:01.544 "trsvcid": "4420", 00:13:01.544 "trtype": "TCP" 00:13:01.544 }, 00:13:01.544 "peer_address": { 00:13:01.544 "adrfam": "IPv4", 00:13:01.544 "traddr": "10.0.0.1", 00:13:01.544 "trsvcid": "42928", 00:13:01.544 "trtype": "TCP" 00:13:01.544 }, 00:13:01.544 "qid": 0, 00:13:01.544 "state": "enabled", 00:13:01.544 "thread": "nvmf_tgt_poll_group_000" 00:13:01.544 } 00:13:01.544 ]' 00:13:01.544 15:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.544 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:01.544 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.544 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:01.544 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.544 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.544 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.544 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.802 15:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:13:02.736 15:58:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.736 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:02.736 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.736 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.736 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.736 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.736 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:02.736 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.995 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.278 00:13:03.278 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.278 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.278 15:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.535 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.535 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.535 15:58:57 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.535 15:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.535 15:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.535 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.535 { 00:13:03.535 "auth": { 00:13:03.535 "dhgroup": "null", 00:13:03.535 "digest": "sha384", 00:13:03.536 "state": "completed" 00:13:03.536 }, 00:13:03.536 "cntlid": 53, 00:13:03.536 "listen_address": { 00:13:03.536 "adrfam": "IPv4", 00:13:03.536 "traddr": "10.0.0.2", 00:13:03.536 "trsvcid": "4420", 00:13:03.536 "trtype": "TCP" 00:13:03.536 }, 00:13:03.536 "peer_address": { 00:13:03.536 "adrfam": "IPv4", 00:13:03.536 "traddr": "10.0.0.1", 00:13:03.536 "trsvcid": "42942", 00:13:03.536 "trtype": "TCP" 00:13:03.536 }, 00:13:03.536 "qid": 0, 00:13:03.536 "state": "enabled", 00:13:03.536 "thread": "nvmf_tgt_poll_group_000" 00:13:03.536 } 00:13:03.536 ]' 00:13:03.536 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.536 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:03.536 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.536 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:03.536 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.793 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.793 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.793 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.052 15:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:13:04.617 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.617 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:04.617 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.617 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.617 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.617 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.617 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:04.617 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:04.893 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:05.492 00:13:05.492 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.492 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.492 15:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.492 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.750 { 00:13:05.750 "auth": { 00:13:05.750 "dhgroup": "null", 00:13:05.750 "digest": "sha384", 00:13:05.750 "state": "completed" 00:13:05.750 }, 00:13:05.750 "cntlid": 55, 00:13:05.750 "listen_address": { 00:13:05.750 "adrfam": "IPv4", 00:13:05.750 "traddr": "10.0.0.2", 00:13:05.750 "trsvcid": "4420", 00:13:05.750 "trtype": "TCP" 00:13:05.750 }, 00:13:05.750 "peer_address": { 00:13:05.750 "adrfam": "IPv4", 00:13:05.750 "traddr": "10.0.0.1", 00:13:05.750 "trsvcid": "55758", 00:13:05.750 "trtype": "TCP" 00:13:05.750 }, 00:13:05.750 "qid": 0, 00:13:05.750 "state": "enabled", 00:13:05.750 "thread": "nvmf_tgt_poll_group_000" 00:13:05.750 } 00:13:05.750 ]' 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:05.750 15:58:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.750 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.007 15:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:13:06.573 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.573 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:06.831 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.831 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.831 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.831 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:06.831 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.831 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:06.831 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.089 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.348 00:13:07.348 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.348 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.348 15:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.607 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.607 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.607 15:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.607 15:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.607 15:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.607 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.607 { 00:13:07.607 "auth": { 00:13:07.607 "dhgroup": "ffdhe2048", 00:13:07.607 "digest": "sha384", 00:13:07.607 "state": "completed" 00:13:07.607 }, 00:13:07.607 "cntlid": 57, 00:13:07.607 "listen_address": { 00:13:07.607 "adrfam": "IPv4", 00:13:07.607 "traddr": "10.0.0.2", 00:13:07.607 "trsvcid": "4420", 00:13:07.607 "trtype": "TCP" 00:13:07.607 }, 00:13:07.607 "peer_address": { 00:13:07.607 "adrfam": "IPv4", 00:13:07.607 "traddr": "10.0.0.1", 00:13:07.607 "trsvcid": "55784", 00:13:07.607 "trtype": "TCP" 00:13:07.607 }, 00:13:07.607 "qid": 0, 00:13:07.607 "state": "enabled", 00:13:07.607 "thread": "nvmf_tgt_poll_group_000" 00:13:07.607 } 00:13:07.607 ]' 00:13:07.607 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.607 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:07.607 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.607 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:07.607 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.866 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.866 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.866 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.125 15:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret 
DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:13:08.691 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.691 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:08.691 15:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.691 15:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.691 15:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.691 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.691 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:08.691 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.949 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.531 00:13:09.531 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:09.531 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:09.531 15:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.531 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
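The jq filters traced above are the core assertion of each authentication round: they pull the negotiated digest, DH group and final state out of the nvmf_subsystem_get_qpairs listing and compare them with the expected values. Applied to the qpairs array printed above for cntlid 57 (saved to a hypothetical qpairs.json; the file name is illustrative, not part of the test suite), the same filters would give:

# expected output for the sha384/ffdhe2048 round shown above
jq -r '.[0].auth.digest'  qpairs.json   # sha384
jq -r '.[0].auth.dhgroup' qpairs.json   # ffdhe2048
jq -r '.[0].auth.state'   qpairs.json   # completed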
00:13:09.531 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.531 15:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.531 15:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.531 15:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.531 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.531 { 00:13:09.531 "auth": { 00:13:09.531 "dhgroup": "ffdhe2048", 00:13:09.531 "digest": "sha384", 00:13:09.531 "state": "completed" 00:13:09.531 }, 00:13:09.531 "cntlid": 59, 00:13:09.531 "listen_address": { 00:13:09.531 "adrfam": "IPv4", 00:13:09.531 "traddr": "10.0.0.2", 00:13:09.531 "trsvcid": "4420", 00:13:09.531 "trtype": "TCP" 00:13:09.531 }, 00:13:09.531 "peer_address": { 00:13:09.531 "adrfam": "IPv4", 00:13:09.531 "traddr": "10.0.0.1", 00:13:09.531 "trsvcid": "55808", 00:13:09.531 "trtype": "TCP" 00:13:09.531 }, 00:13:09.531 "qid": 0, 00:13:09.531 "state": "enabled", 00:13:09.531 "thread": "nvmf_tgt_poll_group_000" 00:13:09.531 } 00:13:09.531 ]' 00:13:09.531 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.789 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:09.789 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.789 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:09.789 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.789 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.789 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.789 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.046 15:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.981 15:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.548 00:13:11.548 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.548 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:11.548 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.807 { 00:13:11.807 "auth": { 00:13:11.807 "dhgroup": "ffdhe2048", 00:13:11.807 "digest": "sha384", 00:13:11.807 "state": "completed" 00:13:11.807 }, 00:13:11.807 "cntlid": 61, 00:13:11.807 "listen_address": { 00:13:11.807 "adrfam": "IPv4", 00:13:11.807 "traddr": "10.0.0.2", 00:13:11.807 "trsvcid": "4420", 00:13:11.807 "trtype": "TCP" 00:13:11.807 }, 00:13:11.807 "peer_address": { 00:13:11.807 "adrfam": "IPv4", 00:13:11.807 "traddr": "10.0.0.1", 00:13:11.807 "trsvcid": "55836", 00:13:11.807 "trtype": "TCP" 00:13:11.807 }, 00:13:11.807 "qid": 0, 00:13:11.807 "state": "enabled", 00:13:11.807 "thread": 
"nvmf_tgt_poll_group_000" 00:13:11.807 } 00:13:11.807 ]' 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.807 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.374 15:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:13:12.941 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.941 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:12.941 15:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.941 15:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.941 15:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.941 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.941 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:12.941 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:13.199 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:13:13.199 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:13.199 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:13.199 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:13.199 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:13.200 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.200 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:13:13.200 15:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.200 15:59:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.200 15:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.200 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.200 15:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.767 00:13:13.767 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.767 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.767 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:14.061 { 00:13:14.061 "auth": { 00:13:14.061 "dhgroup": "ffdhe2048", 00:13:14.061 "digest": "sha384", 00:13:14.061 "state": "completed" 00:13:14.061 }, 00:13:14.061 "cntlid": 63, 00:13:14.061 "listen_address": { 00:13:14.061 "adrfam": "IPv4", 00:13:14.061 "traddr": "10.0.0.2", 00:13:14.061 "trsvcid": "4420", 00:13:14.061 "trtype": "TCP" 00:13:14.061 }, 00:13:14.061 "peer_address": { 00:13:14.061 "adrfam": "IPv4", 00:13:14.061 "traddr": "10.0.0.1", 00:13:14.061 "trsvcid": "55852", 00:13:14.061 "trtype": "TCP" 00:13:14.061 }, 00:13:14.061 "qid": 0, 00:13:14.061 "state": "enabled", 00:13:14.061 "thread": "nvmf_tgt_poll_group_000" 00:13:14.061 } 00:13:14.061 ]' 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.061 15:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.319 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid 
a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.255 15:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.822 00:13:15.822 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.822 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.822 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.822 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.822 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.822 15:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.822 15:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.822 15:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.822 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.822 { 00:13:15.822 "auth": { 00:13:15.822 "dhgroup": "ffdhe3072", 00:13:15.822 "digest": "sha384", 00:13:15.822 "state": "completed" 00:13:15.822 }, 00:13:15.822 "cntlid": 65, 00:13:15.822 "listen_address": { 00:13:15.822 "adrfam": "IPv4", 00:13:15.822 "traddr": "10.0.0.2", 00:13:15.822 "trsvcid": "4420", 00:13:15.822 "trtype": "TCP" 00:13:15.822 }, 00:13:15.822 "peer_address": { 00:13:15.822 "adrfam": "IPv4", 00:13:15.822 "traddr": "10.0.0.1", 00:13:15.822 "trsvcid": "41024", 00:13:15.822 "trtype": "TCP" 00:13:15.822 }, 00:13:15.822 "qid": 0, 00:13:15.822 "state": "enabled", 00:13:15.822 "thread": "nvmf_tgt_poll_group_000" 00:13:15.822 } 00:13:15.822 ]' 00:13:15.822 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:16.081 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:16.081 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:16.081 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:16.081 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:16.081 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.081 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.081 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.339 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:13:16.909 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.909 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:16.909 15:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.909 15:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.909 15:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.909 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:13:16.909 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:16.909 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:17.183 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:13:17.183 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.183 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:17.183 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:17.184 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:17.184 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.184 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.184 15:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.184 15:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.184 15:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.184 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.184 15:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.442 00:13:17.702 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.702 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.702 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.702 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.702 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.702 15:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.702 15:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.702 15:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.702 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.702 { 00:13:17.702 "auth": { 00:13:17.702 "dhgroup": "ffdhe3072", 00:13:17.702 "digest": "sha384", 00:13:17.702 "state": "completed" 00:13:17.702 }, 00:13:17.702 "cntlid": 67, 00:13:17.702 "listen_address": { 00:13:17.702 "adrfam": "IPv4", 00:13:17.702 "traddr": "10.0.0.2", 00:13:17.702 "trsvcid": "4420", 00:13:17.702 "trtype": "TCP" 00:13:17.702 }, 00:13:17.702 
"peer_address": { 00:13:17.702 "adrfam": "IPv4", 00:13:17.702 "traddr": "10.0.0.1", 00:13:17.702 "trsvcid": "41054", 00:13:17.702 "trtype": "TCP" 00:13:17.702 }, 00:13:17.702 "qid": 0, 00:13:17.702 "state": "enabled", 00:13:17.702 "thread": "nvmf_tgt_poll_group_000" 00:13:17.702 } 00:13:17.702 ]' 00:13:17.961 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.961 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:17.961 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.961 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:17.961 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.961 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.961 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.961 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.219 15:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.154 15:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.756 00:13:19.756 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.756 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.756 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.014 { 00:13:20.014 "auth": { 00:13:20.014 "dhgroup": "ffdhe3072", 00:13:20.014 "digest": "sha384", 00:13:20.014 "state": "completed" 00:13:20.014 }, 00:13:20.014 "cntlid": 69, 00:13:20.014 "listen_address": { 00:13:20.014 "adrfam": "IPv4", 00:13:20.014 "traddr": "10.0.0.2", 00:13:20.014 "trsvcid": "4420", 00:13:20.014 "trtype": "TCP" 00:13:20.014 }, 00:13:20.014 "peer_address": { 00:13:20.014 "adrfam": "IPv4", 00:13:20.014 "traddr": "10.0.0.1", 00:13:20.014 "trsvcid": "41080", 00:13:20.014 "trtype": "TCP" 00:13:20.014 }, 00:13:20.014 "qid": 0, 00:13:20.014 "state": "enabled", 00:13:20.014 "thread": "nvmf_tgt_poll_group_000" 00:13:20.014 } 00:13:20.014 ]' 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.014 15:59:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.273 15:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:13:20.840 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.840 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:20.840 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.840 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.840 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.840 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.840 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:20.840 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:21.098 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:13:21.098 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.098 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:21.098 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:21.098 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:21.099 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.099 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:13:21.099 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.099 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.099 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.099 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:21.099 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:21.689 00:13:21.689 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:13:21.689 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.689 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.950 { 00:13:21.950 "auth": { 00:13:21.950 "dhgroup": "ffdhe3072", 00:13:21.950 "digest": "sha384", 00:13:21.950 "state": "completed" 00:13:21.950 }, 00:13:21.950 "cntlid": 71, 00:13:21.950 "listen_address": { 00:13:21.950 "adrfam": "IPv4", 00:13:21.950 "traddr": "10.0.0.2", 00:13:21.950 "trsvcid": "4420", 00:13:21.950 "trtype": "TCP" 00:13:21.950 }, 00:13:21.950 "peer_address": { 00:13:21.950 "adrfam": "IPv4", 00:13:21.950 "traddr": "10.0.0.1", 00:13:21.950 "trsvcid": "41102", 00:13:21.950 "trtype": "TCP" 00:13:21.950 }, 00:13:21.950 "qid": 0, 00:13:21.950 "state": "enabled", 00:13:21.950 "thread": "nvmf_tgt_poll_group_000" 00:13:21.950 } 00:13:21.950 ]' 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.950 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.208 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:13:23.142 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.142 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:23.142 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.142 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.142 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
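By this point the trace has cycled the same sequence through the null, ffdhe2048 and ffdhe3072 groups and is about to start ffdhe4096. The repeating structure is a loop over DH groups and key indices in which connect_authenticate registers the host with a DH-HMAC-CHAP key, attaches a controller from the SPDK host instance, verifies the negotiated parameters and detaches again. A hedged reconstruction of that flow, built only from the commands expanded in this log (the real target/auth.sh may differ in details; the keys/ckeys arrays are assumed to have been populated earlier with the DHHC-1 secrets seen above, and rpc_cmd is assumed to wrap scripts/rpc.py on the target's default RPC socket, since its expansion never appears in this excerpt):

# hostrpc drives the separate SPDK host instance on /var/tmp/host.sock, as
# every expanded hostrpc call in the trace shows; rpc_cmd is an assumption.
rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	local ckey=() qpairs
	# key3 has no controller secret in this run, hence the conditional ckey
	[[ -n ${ckeys[$keyid]:-} ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")

	# allow the host to connect to the subsystem with this key pair
	rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
	# authenticate from the SPDK host side
	hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
		-q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" "${ckey[@]}"
	# confirm the controller exists and the qpair finished authentication
	[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
	qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
	[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
	[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
	[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
	hostrpc bdev_nvme_detach_controller nvme0
}

# one digest (sha384) crossed with the DH groups and key indices seen so far
for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
	for keyid in "${!keys[@]}"; do
		hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
		connect_authenticate sha384 "$dhgroup" "$keyid"
	done
done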
00:13:23.142 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:23.142 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.142 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:23.142 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.401 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.660 00:13:23.660 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.660 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.660 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.919 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.919 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.919 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.919 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.919 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.919 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.919 { 00:13:23.919 "auth": { 00:13:23.919 "dhgroup": "ffdhe4096", 00:13:23.919 "digest": "sha384", 00:13:23.919 "state": "completed" 00:13:23.919 }, 00:13:23.919 "cntlid": 73, 
00:13:23.919 "listen_address": { 00:13:23.919 "adrfam": "IPv4", 00:13:23.919 "traddr": "10.0.0.2", 00:13:23.919 "trsvcid": "4420", 00:13:23.919 "trtype": "TCP" 00:13:23.919 }, 00:13:23.919 "peer_address": { 00:13:23.919 "adrfam": "IPv4", 00:13:23.919 "traddr": "10.0.0.1", 00:13:23.919 "trsvcid": "41128", 00:13:23.919 "trtype": "TCP" 00:13:23.919 }, 00:13:23.919 "qid": 0, 00:13:23.919 "state": "enabled", 00:13:23.919 "thread": "nvmf_tgt_poll_group_000" 00:13:23.919 } 00:13:23.919 ]' 00:13:23.919 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.919 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:23.919 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.919 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:23.919 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.177 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.177 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.177 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.437 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:13:25.002 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.002 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:25.002 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.002 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.003 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.003 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.003 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:25.003 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:25.259 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:13:25.260 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.260 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:25.260 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:25.260 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:25.260 
15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.260 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.260 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.260 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.260 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.260 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.260 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.517 00:13:25.517 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:25.517 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:25.517 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.082 { 00:13:26.082 "auth": { 00:13:26.082 "dhgroup": "ffdhe4096", 00:13:26.082 "digest": "sha384", 00:13:26.082 "state": "completed" 00:13:26.082 }, 00:13:26.082 "cntlid": 75, 00:13:26.082 "listen_address": { 00:13:26.082 "adrfam": "IPv4", 00:13:26.082 "traddr": "10.0.0.2", 00:13:26.082 "trsvcid": "4420", 00:13:26.082 "trtype": "TCP" 00:13:26.082 }, 00:13:26.082 "peer_address": { 00:13:26.082 "adrfam": "IPv4", 00:13:26.082 "traddr": "10.0.0.1", 00:13:26.082 "trsvcid": "49774", 00:13:26.082 "trtype": "TCP" 00:13:26.082 }, 00:13:26.082 "qid": 0, 00:13:26.082 "state": "enabled", 00:13:26.082 "thread": "nvmf_tgt_poll_group_000" 00:13:26.082 } 00:13:26.082 ]' 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.082 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.339 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:13:26.903 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.903 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:26.903 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.903 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.903 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.903 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:26.903 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:26.903 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:27.467 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:13:27.467 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.467 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:27.467 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:27.467 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:27.467 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.467 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.467 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.467 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.468 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.468 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.468 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.725 00:13:27.725 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:27.725 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:27.725 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.983 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.983 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.983 15:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.983 15:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.983 15:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.983 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:27.983 { 00:13:27.983 "auth": { 00:13:27.983 "dhgroup": "ffdhe4096", 00:13:27.983 "digest": "sha384", 00:13:27.983 "state": "completed" 00:13:27.983 }, 00:13:27.983 "cntlid": 77, 00:13:27.983 "listen_address": { 00:13:27.983 "adrfam": "IPv4", 00:13:27.983 "traddr": "10.0.0.2", 00:13:27.983 "trsvcid": "4420", 00:13:27.983 "trtype": "TCP" 00:13:27.983 }, 00:13:27.983 "peer_address": { 00:13:27.983 "adrfam": "IPv4", 00:13:27.983 "traddr": "10.0.0.1", 00:13:27.983 "trsvcid": "49806", 00:13:27.983 "trtype": "TCP" 00:13:27.983 }, 00:13:27.983 "qid": 0, 00:13:27.983 "state": "enabled", 00:13:27.983 "thread": "nvmf_tgt_poll_group_000" 00:13:27.983 } 00:13:27.983 ]' 00:13:27.983 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:27.983 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:27.983 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:27.983 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:27.983 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.241 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.241 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.241 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.498 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:13:29.065 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.065 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:29.065 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.065 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.065 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.065 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.065 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:29.065 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:29.323 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:29.887 00:13:29.888 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.888 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.888 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:30.145 { 00:13:30.145 "auth": { 00:13:30.145 "dhgroup": "ffdhe4096", 00:13:30.145 "digest": "sha384", 00:13:30.145 "state": "completed" 00:13:30.145 }, 00:13:30.145 "cntlid": 79, 00:13:30.145 "listen_address": { 00:13:30.145 "adrfam": "IPv4", 00:13:30.145 "traddr": "10.0.0.2", 00:13:30.145 "trsvcid": "4420", 00:13:30.145 "trtype": "TCP" 00:13:30.145 }, 00:13:30.145 "peer_address": { 00:13:30.145 "adrfam": "IPv4", 00:13:30.145 "traddr": "10.0.0.1", 00:13:30.145 "trsvcid": "49830", 00:13:30.145 "trtype": "TCP" 00:13:30.145 }, 00:13:30.145 "qid": 0, 00:13:30.145 "state": "enabled", 00:13:30.145 "thread": "nvmf_tgt_poll_group_000" 00:13:30.145 } 00:13:30.145 ]' 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.145 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.403 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:13:31.337 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.337 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:31.337 15:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.337 15:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.337 15:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.337 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:31.337 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.337 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:31.337 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
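The ffdhe4096 group has just completed and the script is setting up the sha384/ffdhe6144 pass. After every attach it reads the qpair back from the target and asserts the negotiated parameters with the jq filters visible in the trace; the quoted patterns such as \s\h\a\3\8\4 are simply bash's xtrace rendering of those comparisons. A minimal sketch of that check for the pass starting here, with $SUBNQN again standing in for the cnode0 NQN:

    # Fetch the active qpair and verify what was actually negotiated.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]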
00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.595 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.161 00:13:32.161 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.162 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.162 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.162 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.162 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.162 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.162 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.420 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.420 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.420 { 00:13:32.420 "auth": { 00:13:32.420 "dhgroup": "ffdhe6144", 00:13:32.420 "digest": "sha384", 00:13:32.420 "state": "completed" 00:13:32.420 }, 00:13:32.420 "cntlid": 81, 00:13:32.420 "listen_address": { 00:13:32.420 "adrfam": "IPv4", 00:13:32.420 "traddr": "10.0.0.2", 00:13:32.420 "trsvcid": "4420", 00:13:32.420 "trtype": "TCP" 00:13:32.420 }, 00:13:32.420 "peer_address": { 00:13:32.420 "adrfam": "IPv4", 00:13:32.420 "traddr": "10.0.0.1", 00:13:32.420 "trsvcid": "49850", 00:13:32.420 "trtype": "TCP" 00:13:32.420 }, 00:13:32.420 "qid": 0, 00:13:32.420 "state": "enabled", 00:13:32.420 "thread": "nvmf_tgt_poll_group_000" 00:13:32.420 } 00:13:32.420 ]' 00:13:32.420 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.420 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:32.420 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.420 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:13:32.420 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.420 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.420 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.420 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.678 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:13:33.639 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.639 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:33.639 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.639 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.639 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.639 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.639 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:33.639 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.897 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.464 00:13:34.464 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.464 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.464 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.722 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.722 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.722 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.722 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.722 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.722 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.722 { 00:13:34.722 "auth": { 00:13:34.722 "dhgroup": "ffdhe6144", 00:13:34.722 "digest": "sha384", 00:13:34.722 "state": "completed" 00:13:34.722 }, 00:13:34.722 "cntlid": 83, 00:13:34.722 "listen_address": { 00:13:34.722 "adrfam": "IPv4", 00:13:34.722 "traddr": "10.0.0.2", 00:13:34.722 "trsvcid": "4420", 00:13:34.722 "trtype": "TCP" 00:13:34.722 }, 00:13:34.722 "peer_address": { 00:13:34.722 "adrfam": "IPv4", 00:13:34.722 "traddr": "10.0.0.1", 00:13:34.722 "trsvcid": "49872", 00:13:34.722 "trtype": "TCP" 00:13:34.722 }, 00:13:34.722 "qid": 0, 00:13:34.723 "state": "enabled", 00:13:34.723 "thread": "nvmf_tgt_poll_group_000" 00:13:34.723 } 00:13:34.723 ]' 00:13:34.723 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.723 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:34.723 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.723 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:34.723 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.723 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.723 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.723 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.981 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:13:35.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.918 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.485 00:13:36.485 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.485 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.485 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.744 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.744 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.744 15:59:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.744 15:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.744 15:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.744 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.744 { 00:13:36.744 "auth": { 00:13:36.744 "dhgroup": "ffdhe6144", 00:13:36.744 "digest": "sha384", 00:13:36.744 "state": "completed" 00:13:36.744 }, 00:13:36.744 "cntlid": 85, 00:13:36.744 "listen_address": { 00:13:36.744 "adrfam": "IPv4", 00:13:36.744 "traddr": "10.0.0.2", 00:13:36.744 "trsvcid": "4420", 00:13:36.744 "trtype": "TCP" 00:13:36.744 }, 00:13:36.744 "peer_address": { 00:13:36.744 "adrfam": "IPv4", 00:13:36.744 "traddr": "10.0.0.1", 00:13:36.744 "trsvcid": "50674", 00:13:36.744 "trtype": "TCP" 00:13:36.744 }, 00:13:36.744 "qid": 0, 00:13:36.744 "state": "enabled", 00:13:36.744 "thread": "nvmf_tgt_poll_group_000" 00:13:36.744 } 00:13:36.744 ]' 00:13:36.744 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.744 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:36.744 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.026 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:37.026 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:37.026 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.026 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.026 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.285 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:13:37.850 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.850 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:37.850 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.850 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.850 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.850 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.850 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:37.850 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:38.108 15:59:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:13:38.108 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.108 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:38.108 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:38.108 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:38.108 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.108 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:13:38.108 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.108 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.108 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.108 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.108 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.673 00:13:38.673 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:38.673 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:38.673 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.932 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.932 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.932 15:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.932 15:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.932 15:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.932 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.932 { 00:13:38.932 "auth": { 00:13:38.932 "dhgroup": "ffdhe6144", 00:13:38.932 "digest": "sha384", 00:13:38.932 "state": "completed" 00:13:38.932 }, 00:13:38.932 "cntlid": 87, 00:13:38.932 "listen_address": { 00:13:38.932 "adrfam": "IPv4", 00:13:38.932 "traddr": "10.0.0.2", 00:13:38.932 "trsvcid": "4420", 00:13:38.932 "trtype": "TCP" 00:13:38.932 }, 00:13:38.932 "peer_address": { 00:13:38.932 "adrfam": "IPv4", 00:13:38.932 "traddr": "10.0.0.1", 00:13:38.932 "trsvcid": "50710", 00:13:38.932 "trtype": "TCP" 00:13:38.932 }, 00:13:38.932 "qid": 0, 00:13:38.932 "state": "enabled", 00:13:38.932 "thread": "nvmf_tgt_poll_group_000" 00:13:38.932 } 00:13:38.932 ]' 00:13:38.932 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.932 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:13:38.932 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.190 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:39.190 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.190 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.190 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.190 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.449 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:13:40.017 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.017 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:40.017 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.017 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.017 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.017 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:40.017 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.017 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:40.017 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:40.588 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:13:40.588 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.588 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:40.588 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:40.588 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:40.588 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.588 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.588 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.588 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.588 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.588 15:59:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.588 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.158 00:13:41.158 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.158 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.158 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.416 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.416 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.416 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.416 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.416 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.416 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.416 { 00:13:41.416 "auth": { 00:13:41.416 "dhgroup": "ffdhe8192", 00:13:41.416 "digest": "sha384", 00:13:41.416 "state": "completed" 00:13:41.416 }, 00:13:41.416 "cntlid": 89, 00:13:41.416 "listen_address": { 00:13:41.416 "adrfam": "IPv4", 00:13:41.416 "traddr": "10.0.0.2", 00:13:41.416 "trsvcid": "4420", 00:13:41.416 "trtype": "TCP" 00:13:41.416 }, 00:13:41.416 "peer_address": { 00:13:41.416 "adrfam": "IPv4", 00:13:41.416 "traddr": "10.0.0.1", 00:13:41.416 "trsvcid": "50734", 00:13:41.416 "trtype": "TCP" 00:13:41.416 }, 00:13:41.416 "qid": 0, 00:13:41.416 "state": "enabled", 00:13:41.416 "thread": "nvmf_tgt_poll_group_000" 00:13:41.416 } 00:13:41.416 ]' 00:13:41.416 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.416 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.416 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.416 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:41.416 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.416 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.416 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.416 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.982 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret 
DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:13:42.551 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.551 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:42.551 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.551 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.551 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.551 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.551 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:42.551 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.815 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.748 00:13:43.748 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.748 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.748 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
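Besides the SPDK initiator, every key is also exercised through the kernel initiator with nvme-cli, handing over the DH-HMAC-CHAP secrets in their DHHC-1 transport encoding. A condensed sketch of that step; $HOSTNQN, $HOSTID, $DHCHAP_KEY and $DHCHAP_CTRL_KEY are placeholders for the uuid-based NQN/host ID and the literal DHHC-1:... strings that appear in the trace.

    # Kernel-initiator pass over the same subsystem, using the same secrets.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0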
00:13:43.748 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.748 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.748 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.748 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.748 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.748 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.748 { 00:13:43.748 "auth": { 00:13:43.748 "dhgroup": "ffdhe8192", 00:13:43.748 "digest": "sha384", 00:13:43.748 "state": "completed" 00:13:43.748 }, 00:13:43.748 "cntlid": 91, 00:13:43.748 "listen_address": { 00:13:43.748 "adrfam": "IPv4", 00:13:43.748 "traddr": "10.0.0.2", 00:13:43.748 "trsvcid": "4420", 00:13:43.748 "trtype": "TCP" 00:13:43.748 }, 00:13:43.748 "peer_address": { 00:13:43.748 "adrfam": "IPv4", 00:13:43.748 "traddr": "10.0.0.1", 00:13:43.748 "trsvcid": "50750", 00:13:43.748 "trtype": "TCP" 00:13:43.748 }, 00:13:43.748 "qid": 0, 00:13:43.748 "state": "enabled", 00:13:43.748 "thread": "nvmf_tgt_poll_group_000" 00:13:43.748 } 00:13:43.748 ]' 00:13:43.748 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:44.006 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:44.006 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:44.006 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:44.006 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:44.006 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.006 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.006 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.264 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:13:45.197 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.197 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:45.197 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.197 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.197 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.197 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:45.197 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:13:45.197 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.456 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.023 00:13:46.023 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.023 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.023 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.283 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.283 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.283 15:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.283 15:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.283 15:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.283 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:46.283 { 00:13:46.283 "auth": { 00:13:46.283 "dhgroup": "ffdhe8192", 00:13:46.283 "digest": "sha384", 00:13:46.283 "state": "completed" 00:13:46.283 }, 00:13:46.283 "cntlid": 93, 00:13:46.283 "listen_address": { 00:13:46.283 "adrfam": "IPv4", 00:13:46.283 "traddr": "10.0.0.2", 00:13:46.283 "trsvcid": "4420", 00:13:46.283 "trtype": "TCP" 00:13:46.283 }, 00:13:46.283 "peer_address": { 00:13:46.283 "adrfam": "IPv4", 00:13:46.283 "traddr": "10.0.0.1", 00:13:46.283 "trsvcid": "35494", 00:13:46.283 
"trtype": "TCP" 00:13:46.283 }, 00:13:46.283 "qid": 0, 00:13:46.283 "state": "enabled", 00:13:46.283 "thread": "nvmf_tgt_poll_group_000" 00:13:46.283 } 00:13:46.283 ]' 00:13:46.283 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.283 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:46.283 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:46.541 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:46.541 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.541 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.541 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.541 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.799 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:13:47.734 15:59:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.734 15:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.992 15:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.992 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:47.992 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.558 00:13:48.558 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.558 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.558 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.816 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.816 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.816 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.816 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.816 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.816 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:48.816 { 00:13:48.816 "auth": { 00:13:48.816 "dhgroup": "ffdhe8192", 00:13:48.816 "digest": "sha384", 00:13:48.816 "state": "completed" 00:13:48.816 }, 00:13:48.816 "cntlid": 95, 00:13:48.816 "listen_address": { 00:13:48.816 "adrfam": "IPv4", 00:13:48.816 "traddr": "10.0.0.2", 00:13:48.816 "trsvcid": "4420", 00:13:48.816 "trtype": "TCP" 00:13:48.816 }, 00:13:48.816 "peer_address": { 00:13:48.816 "adrfam": "IPv4", 00:13:48.816 "traddr": "10.0.0.1", 00:13:48.816 "trsvcid": "35510", 00:13:48.816 "trtype": "TCP" 00:13:48.816 }, 00:13:48.816 "qid": 0, 00:13:48.816 "state": "enabled", 00:13:48.816 "thread": "nvmf_tgt_poll_group_000" 00:13:48.816 } 00:13:48.816 ]' 00:13:48.816 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:48.816 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:48.817 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:48.817 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:49.074 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.074 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.074 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.074 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.333 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:13:49.919 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.919 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:49.919 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.919 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.919 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.919 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:49.919 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:49.919 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:49.919 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:49.919 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.177 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.764 00:13:50.764 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:13:50.764 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.764 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.022 { 00:13:51.022 "auth": { 00:13:51.022 "dhgroup": "null", 00:13:51.022 "digest": "sha512", 00:13:51.022 "state": "completed" 00:13:51.022 }, 00:13:51.022 "cntlid": 97, 00:13:51.022 "listen_address": { 00:13:51.022 "adrfam": "IPv4", 00:13:51.022 "traddr": "10.0.0.2", 00:13:51.022 "trsvcid": "4420", 00:13:51.022 "trtype": "TCP" 00:13:51.022 }, 00:13:51.022 "peer_address": { 00:13:51.022 "adrfam": "IPv4", 00:13:51.022 "traddr": "10.0.0.1", 00:13:51.022 "trsvcid": "35534", 00:13:51.022 "trtype": "TCP" 00:13:51.022 }, 00:13:51.022 "qid": 0, 00:13:51.022 "state": "enabled", 00:13:51.022 "thread": "nvmf_tgt_poll_group_000" 00:13:51.022 } 00:13:51.022 ]' 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.022 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.587 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:13:52.154 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.154 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:52.154 15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.154 15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.154 
15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.154 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.154 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:52.154 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:52.413 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:13:52.413 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.413 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:52.413 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:52.413 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:52.413 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.413 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.413 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.413 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.413 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.413 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.413 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.671 00:13:52.671 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.671 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.671 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.239 { 00:13:53.239 "auth": { 00:13:53.239 "dhgroup": "null", 00:13:53.239 "digest": "sha512", 00:13:53.239 "state": "completed" 00:13:53.239 }, 00:13:53.239 "cntlid": 99, 00:13:53.239 "listen_address": { 
00:13:53.239 "adrfam": "IPv4", 00:13:53.239 "traddr": "10.0.0.2", 00:13:53.239 "trsvcid": "4420", 00:13:53.239 "trtype": "TCP" 00:13:53.239 }, 00:13:53.239 "peer_address": { 00:13:53.239 "adrfam": "IPv4", 00:13:53.239 "traddr": "10.0.0.1", 00:13:53.239 "trsvcid": "35552", 00:13:53.239 "trtype": "TCP" 00:13:53.239 }, 00:13:53.239 "qid": 0, 00:13:53.239 "state": "enabled", 00:13:53.239 "thread": "nvmf_tgt_poll_group_000" 00:13:53.239 } 00:13:53.239 ]' 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.239 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.497 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:13:54.432 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.432 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:54.432 15:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.432 15:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.432 15:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.432 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.432 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:54.432 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.714 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:55.008 00:13:55.008 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.008 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.008 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.267 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.267 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.267 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.267 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.267 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.267 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.267 { 00:13:55.267 "auth": { 00:13:55.267 "dhgroup": "null", 00:13:55.267 "digest": "sha512", 00:13:55.267 "state": "completed" 00:13:55.267 }, 00:13:55.267 "cntlid": 101, 00:13:55.267 "listen_address": { 00:13:55.267 "adrfam": "IPv4", 00:13:55.267 "traddr": "10.0.0.2", 00:13:55.267 "trsvcid": "4420", 00:13:55.267 "trtype": "TCP" 00:13:55.267 }, 00:13:55.267 "peer_address": { 00:13:55.267 "adrfam": "IPv4", 00:13:55.267 "traddr": "10.0.0.1", 00:13:55.267 "trsvcid": "35582", 00:13:55.267 "trtype": "TCP" 00:13:55.267 }, 00:13:55.267 "qid": 0, 00:13:55.267 "state": "enabled", 00:13:55.267 "thread": "nvmf_tgt_poll_group_000" 00:13:55.267 } 00:13:55.267 ]' 00:13:55.267 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.267 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:55.267 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.267 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:55.267 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.525 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.525 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:55.525 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.784 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:13:56.350 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.350 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:56.350 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.350 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.350 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.350 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.350 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:56.350 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:56.915 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:13:56.915 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.915 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:56.915 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:56.915 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:56.915 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.915 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:13:56.915 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.915 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.915 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.916 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:56.916 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:57.174 00:13:57.174 15:59:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.174 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.174 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.432 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.432 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.432 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.432 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.432 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.432 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.432 { 00:13:57.432 "auth": { 00:13:57.432 "dhgroup": "null", 00:13:57.432 "digest": "sha512", 00:13:57.432 "state": "completed" 00:13:57.432 }, 00:13:57.432 "cntlid": 103, 00:13:57.432 "listen_address": { 00:13:57.432 "adrfam": "IPv4", 00:13:57.432 "traddr": "10.0.0.2", 00:13:57.432 "trsvcid": "4420", 00:13:57.432 "trtype": "TCP" 00:13:57.432 }, 00:13:57.432 "peer_address": { 00:13:57.432 "adrfam": "IPv4", 00:13:57.432 "traddr": "10.0.0.1", 00:13:57.432 "trsvcid": "56056", 00:13:57.432 "trtype": "TCP" 00:13:57.432 }, 00:13:57.432 "qid": 0, 00:13:57.432 "state": "enabled", 00:13:57.432 "thread": "nvmf_tgt_poll_group_000" 00:13:57.432 } 00:13:57.432 ]' 00:13:57.432 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.432 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:57.432 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.432 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:57.432 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.689 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.689 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.689 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.947 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:13:58.511 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.511 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:13:58.511 15:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.511 15:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.511 15:59:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.511 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:58.511 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.511 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:58.511 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.077 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.334 00:13:59.334 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.334 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.334 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.593 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.593 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.593 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.593 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.593 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.593 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.593 { 00:13:59.593 "auth": { 00:13:59.593 "dhgroup": "ffdhe2048", 00:13:59.593 "digest": "sha512", 00:13:59.593 "state": 
"completed" 00:13:59.593 }, 00:13:59.593 "cntlid": 105, 00:13:59.593 "listen_address": { 00:13:59.593 "adrfam": "IPv4", 00:13:59.593 "traddr": "10.0.0.2", 00:13:59.593 "trsvcid": "4420", 00:13:59.593 "trtype": "TCP" 00:13:59.593 }, 00:13:59.593 "peer_address": { 00:13:59.593 "adrfam": "IPv4", 00:13:59.593 "traddr": "10.0.0.1", 00:13:59.593 "trsvcid": "56080", 00:13:59.593 "trtype": "TCP" 00:13:59.593 }, 00:13:59.593 "qid": 0, 00:13:59.593 "state": "enabled", 00:13:59.593 "thread": "nvmf_tgt_poll_group_000" 00:13:59.593 } 00:13:59.593 ]' 00:13:59.593 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.853 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.853 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.853 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:59.853 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.853 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.853 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.853 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.110 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:14:01.042 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.042 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:01.042 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.042 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.042 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.042 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.042 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:01.042 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # key=key1 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.300 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.558 00:14:01.558 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.558 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.558 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.816 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.816 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.816 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.816 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.816 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.816 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.816 { 00:14:01.816 "auth": { 00:14:01.816 "dhgroup": "ffdhe2048", 00:14:01.816 "digest": "sha512", 00:14:01.816 "state": "completed" 00:14:01.816 }, 00:14:01.816 "cntlid": 107, 00:14:01.816 "listen_address": { 00:14:01.816 "adrfam": "IPv4", 00:14:01.816 "traddr": "10.0.0.2", 00:14:01.816 "trsvcid": "4420", 00:14:01.816 "trtype": "TCP" 00:14:01.816 }, 00:14:01.816 "peer_address": { 00:14:01.816 "adrfam": "IPv4", 00:14:01.816 "traddr": "10.0.0.1", 00:14:01.816 "trsvcid": "56092", 00:14:01.816 "trtype": "TCP" 00:14:01.816 }, 00:14:01.816 "qid": 0, 00:14:01.816 "state": "enabled", 00:14:01.816 "thread": "nvmf_tgt_poll_group_000" 00:14:01.816 } 00:14:01.816 ]' 00:14:01.816 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.816 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.816 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.816 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:01.816 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.074 15:59:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.074 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.074 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.332 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.266 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.833 00:14:03.833 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.833 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.833 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.148 { 00:14:04.148 "auth": { 00:14:04.148 "dhgroup": "ffdhe2048", 00:14:04.148 "digest": "sha512", 00:14:04.148 "state": "completed" 00:14:04.148 }, 00:14:04.148 "cntlid": 109, 00:14:04.148 "listen_address": { 00:14:04.148 "adrfam": "IPv4", 00:14:04.148 "traddr": "10.0.0.2", 00:14:04.148 "trsvcid": "4420", 00:14:04.148 "trtype": "TCP" 00:14:04.148 }, 00:14:04.148 "peer_address": { 00:14:04.148 "adrfam": "IPv4", 00:14:04.148 "traddr": "10.0.0.1", 00:14:04.148 "trsvcid": "56120", 00:14:04.148 "trtype": "TCP" 00:14:04.148 }, 00:14:04.148 "qid": 0, 00:14:04.148 "state": "enabled", 00:14:04.148 "thread": "nvmf_tgt_poll_group_000" 00:14:04.148 } 00:14:04.148 ]' 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.148 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.450 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:14:05.384 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.384 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:05.384 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.384 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.384 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.384 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.384 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:05.384 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:05.642 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:05.900 00:14:05.900 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.900 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.900 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.159 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.159 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.159 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.159 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.159 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.159 15:59:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:14:06.159 { 00:14:06.159 "auth": { 00:14:06.159 "dhgroup": "ffdhe2048", 00:14:06.159 "digest": "sha512", 00:14:06.159 "state": "completed" 00:14:06.159 }, 00:14:06.159 "cntlid": 111, 00:14:06.159 "listen_address": { 00:14:06.159 "adrfam": "IPv4", 00:14:06.159 "traddr": "10.0.0.2", 00:14:06.159 "trsvcid": "4420", 00:14:06.159 "trtype": "TCP" 00:14:06.159 }, 00:14:06.159 "peer_address": { 00:14:06.159 "adrfam": "IPv4", 00:14:06.159 "traddr": "10.0.0.1", 00:14:06.159 "trsvcid": "37240", 00:14:06.159 "trtype": "TCP" 00:14:06.159 }, 00:14:06.159 "qid": 0, 00:14:06.159 "state": "enabled", 00:14:06.159 "thread": "nvmf_tgt_poll_group_000" 00:14:06.159 } 00:14:06.159 ]' 00:14:06.159 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.417 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:06.417 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.417 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:06.417 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.417 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.417 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.417 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.675 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:14:07.609 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.609 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:07.609 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.609 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.609 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.609 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:07.609 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:07.609 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:07.609 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.868 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.126 00:14:08.126 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.126 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.126 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.382 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.382 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.382 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.382 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.382 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.382 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.382 { 00:14:08.382 "auth": { 00:14:08.382 "dhgroup": "ffdhe3072", 00:14:08.382 "digest": "sha512", 00:14:08.382 "state": "completed" 00:14:08.382 }, 00:14:08.382 "cntlid": 113, 00:14:08.382 "listen_address": { 00:14:08.382 "adrfam": "IPv4", 00:14:08.382 "traddr": "10.0.0.2", 00:14:08.382 "trsvcid": "4420", 00:14:08.382 "trtype": "TCP" 00:14:08.382 }, 00:14:08.382 "peer_address": { 00:14:08.382 "adrfam": "IPv4", 00:14:08.382 "traddr": "10.0.0.1", 00:14:08.382 "trsvcid": "37258", 00:14:08.382 "trtype": "TCP" 00:14:08.382 }, 00:14:08.382 "qid": 0, 00:14:08.382 "state": "enabled", 00:14:08.382 "thread": "nvmf_tgt_poll_group_000" 00:14:08.382 } 00:14:08.382 ]' 00:14:08.382 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:08.382 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:08.382 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:08.638 16:00:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:08.638 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:08.638 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.638 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.639 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.895 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:14:09.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:09.459 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.459 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.459 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:09.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:09.459 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:09.717 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:14:09.717 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.717 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:09.717 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:09.717 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:09.717 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.717 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.717 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.717 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.717 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.717 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.718 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.284 00:14:10.284 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.284 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.284 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.542 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.542 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.542 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.542 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.542 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.542 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.543 { 00:14:10.543 "auth": { 00:14:10.543 "dhgroup": "ffdhe3072", 00:14:10.543 "digest": "sha512", 00:14:10.543 "state": "completed" 00:14:10.543 }, 00:14:10.543 "cntlid": 115, 00:14:10.543 "listen_address": { 00:14:10.543 "adrfam": "IPv4", 00:14:10.543 "traddr": "10.0.0.2", 00:14:10.543 "trsvcid": "4420", 00:14:10.543 "trtype": "TCP" 00:14:10.543 }, 00:14:10.543 "peer_address": { 00:14:10.543 "adrfam": "IPv4", 00:14:10.543 "traddr": "10.0.0.1", 00:14:10.543 "trsvcid": "37284", 00:14:10.543 "trtype": "TCP" 00:14:10.543 }, 00:14:10.543 "qid": 0, 00:14:10.543 "state": "enabled", 00:14:10.543 "thread": "nvmf_tgt_poll_group_000" 00:14:10.543 } 00:14:10.543 ]' 00:14:10.543 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:10.543 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:10.543 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.543 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:10.543 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:10.543 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.543 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.543 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.108 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:14:11.674 16:00:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.674 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:11.674 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.674 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.674 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.674 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.674 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:11.674 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.933 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.191 00:14:12.449 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.449 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.449 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.707 { 00:14:12.707 "auth": { 00:14:12.707 "dhgroup": "ffdhe3072", 00:14:12.707 "digest": "sha512", 00:14:12.707 "state": "completed" 00:14:12.707 }, 00:14:12.707 "cntlid": 117, 00:14:12.707 "listen_address": { 00:14:12.707 "adrfam": "IPv4", 00:14:12.707 "traddr": "10.0.0.2", 00:14:12.707 "trsvcid": "4420", 00:14:12.707 "trtype": "TCP" 00:14:12.707 }, 00:14:12.707 "peer_address": { 00:14:12.707 "adrfam": "IPv4", 00:14:12.707 "traddr": "10.0.0.1", 00:14:12.707 "trsvcid": "37314", 00:14:12.707 "trtype": "TCP" 00:14:12.707 }, 00:14:12.707 "qid": 0, 00:14:12.707 "state": "enabled", 00:14:12.707 "thread": "nvmf_tgt_poll_group_000" 00:14:12.707 } 00:14:12.707 ]' 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.707 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.273 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:14:13.839 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.840 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:13.840 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.840 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.840 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.840 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.840 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:13.840 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:14.097 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:14.356 00:14:14.613 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.613 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.613 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.874 { 00:14:14.874 "auth": { 00:14:14.874 "dhgroup": "ffdhe3072", 00:14:14.874 "digest": "sha512", 00:14:14.874 "state": "completed" 00:14:14.874 }, 00:14:14.874 "cntlid": 119, 00:14:14.874 "listen_address": { 00:14:14.874 "adrfam": "IPv4", 00:14:14.874 "traddr": "10.0.0.2", 00:14:14.874 "trsvcid": "4420", 00:14:14.874 "trtype": "TCP" 00:14:14.874 }, 00:14:14.874 "peer_address": { 00:14:14.874 "adrfam": "IPv4", 00:14:14.874 "traddr": "10.0.0.1", 00:14:14.874 "trsvcid": "37348", 00:14:14.874 "trtype": "TCP" 00:14:14.874 }, 00:14:14.874 "qid": 0, 00:14:14.874 "state": "enabled", 00:14:14.874 "thread": "nvmf_tgt_poll_group_000" 00:14:14.874 } 00:14:14.874 ]' 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.874 
16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.874 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.456 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:14:16.042 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.043 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:16.043 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.043 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.043 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.043 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.043 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.043 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:16.043 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:16.312 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:14:16.313 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.313 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:16.313 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:16.313 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:16.313 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.313 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.313 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.313 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.313 16:00:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.313 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.313 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.886 00:14:16.886 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.886 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.886 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.152 { 00:14:17.152 "auth": { 00:14:17.152 "dhgroup": "ffdhe4096", 00:14:17.152 "digest": "sha512", 00:14:17.152 "state": "completed" 00:14:17.152 }, 00:14:17.152 "cntlid": 121, 00:14:17.152 "listen_address": { 00:14:17.152 "adrfam": "IPv4", 00:14:17.152 "traddr": "10.0.0.2", 00:14:17.152 "trsvcid": "4420", 00:14:17.152 "trtype": "TCP" 00:14:17.152 }, 00:14:17.152 "peer_address": { 00:14:17.152 "adrfam": "IPv4", 00:14:17.152 "traddr": "10.0.0.1", 00:14:17.152 "trsvcid": "46060", 00:14:17.152 "trtype": "TCP" 00:14:17.152 }, 00:14:17.152 "qid": 0, 00:14:17.152 "state": "enabled", 00:14:17.152 "thread": "nvmf_tgt_poll_group_000" 00:14:17.152 } 00:14:17.152 ]' 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.152 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.418 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret 
DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:14:18.367 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.367 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:18.367 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.367 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.367 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.367 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.367 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:18.367 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.629 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.893 00:14:19.154 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.154 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.154 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
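Each iteration in the trace above is one connect_authenticate cycle from target/auth.sh: the host application's DH-HMAC-CHAP options are narrowed to the digest/dhgroup under test, the host NQN is registered on the subsystem with a key pair, a controller is attached (which triggers authentication), the resulting qpair is inspected, and everything is torn down before the next combination. The sketch below condenses one such cycle into the rpc.py calls that appear in this log; key1/ckey1 name key objects registered earlier in the test (not shown here), the host-side calls are assumed to go through -s /var/tmp/host.sock while the target-side calls use the target's default RPC socket.

# sketch of one connect_authenticate cycle (key1/ckey1 were registered earlier in the test)
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# host application: restrict negotiation to the digest/dhgroup under test
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# target: allow this host on the subsystem with the bidirectional key pair
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host application: attach a controller, which performs DH-HMAC-CHAP authentication
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# tear down before the next digest/dhgroup/key combination
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"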
00:14:19.415 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.415 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.415 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.415 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.415 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.415 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.415 { 00:14:19.415 "auth": { 00:14:19.415 "dhgroup": "ffdhe4096", 00:14:19.415 "digest": "sha512", 00:14:19.415 "state": "completed" 00:14:19.415 }, 00:14:19.415 "cntlid": 123, 00:14:19.415 "listen_address": { 00:14:19.415 "adrfam": "IPv4", 00:14:19.415 "traddr": "10.0.0.2", 00:14:19.415 "trsvcid": "4420", 00:14:19.415 "trtype": "TCP" 00:14:19.415 }, 00:14:19.415 "peer_address": { 00:14:19.415 "adrfam": "IPv4", 00:14:19.415 "traddr": "10.0.0.1", 00:14:19.415 "trsvcid": "46078", 00:14:19.415 "trtype": "TCP" 00:14:19.415 }, 00:14:19.415 "qid": 0, 00:14:19.415 "state": "enabled", 00:14:19.415 "thread": "nvmf_tgt_poll_group_000" 00:14:19.415 } 00:14:19.415 ]' 00:14:19.415 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.415 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.415 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.415 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:19.415 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.415 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.415 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.415 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.684 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:14:20.635 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.636 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:20.636 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.636 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.636 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.636 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:20.636 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:14:20.636 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.923 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.215 00:14:21.215 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.215 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.215 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.481 { 00:14:21.481 "auth": { 00:14:21.481 "dhgroup": "ffdhe4096", 00:14:21.481 "digest": "sha512", 00:14:21.481 "state": "completed" 00:14:21.481 }, 00:14:21.481 "cntlid": 125, 00:14:21.481 "listen_address": { 00:14:21.481 "adrfam": "IPv4", 00:14:21.481 "traddr": "10.0.0.2", 00:14:21.481 "trsvcid": "4420", 00:14:21.481 "trtype": "TCP" 00:14:21.481 }, 00:14:21.481 "peer_address": { 00:14:21.481 "adrfam": "IPv4", 00:14:21.481 "traddr": "10.0.0.1", 00:14:21.481 "trsvcid": "46108", 00:14:21.481 
"trtype": "TCP" 00:14:21.481 }, 00:14:21.481 "qid": 0, 00:14:21.481 "state": "enabled", 00:14:21.481 "thread": "nvmf_tgt_poll_group_000" 00:14:21.481 } 00:14:21.481 ]' 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.481 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.093 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:14:22.658 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.658 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:22.658 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.658 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.658 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.658 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.658 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:22.658 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:22.917 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:14:22.917 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.917 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:22.917 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:22.917 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:22.917 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.917 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:14:22.917 16:00:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.917 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.917 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.917 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:22.917 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:23.174 00:14:23.431 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.431 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.431 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.689 { 00:14:23.689 "auth": { 00:14:23.689 "dhgroup": "ffdhe4096", 00:14:23.689 "digest": "sha512", 00:14:23.689 "state": "completed" 00:14:23.689 }, 00:14:23.689 "cntlid": 127, 00:14:23.689 "listen_address": { 00:14:23.689 "adrfam": "IPv4", 00:14:23.689 "traddr": "10.0.0.2", 00:14:23.689 "trsvcid": "4420", 00:14:23.689 "trtype": "TCP" 00:14:23.689 }, 00:14:23.689 "peer_address": { 00:14:23.689 "adrfam": "IPv4", 00:14:23.689 "traddr": "10.0.0.1", 00:14:23.689 "trsvcid": "46124", 00:14:23.689 "trtype": "TCP" 00:14:23.689 }, 00:14:23.689 "qid": 0, 00:14:23.689 "state": "enabled", 00:14:23.689 "thread": "nvmf_tgt_poll_group_000" 00:14:23.689 } 00:14:23.689 ]' 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.689 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.256 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:14:24.822 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.822 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:24.822 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.822 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.822 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.822 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:24.822 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:24.822 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:24.822 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.080 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.708 00:14:25.708 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:25.708 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:14:25.708 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.966 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.966 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.966 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.966 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.966 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.966 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.966 { 00:14:25.966 "auth": { 00:14:25.966 "dhgroup": "ffdhe6144", 00:14:25.966 "digest": "sha512", 00:14:25.966 "state": "completed" 00:14:25.966 }, 00:14:25.966 "cntlid": 129, 00:14:25.966 "listen_address": { 00:14:25.966 "adrfam": "IPv4", 00:14:25.966 "traddr": "10.0.0.2", 00:14:25.966 "trsvcid": "4420", 00:14:25.966 "trtype": "TCP" 00:14:25.966 }, 00:14:25.966 "peer_address": { 00:14:25.966 "adrfam": "IPv4", 00:14:25.966 "traddr": "10.0.0.1", 00:14:25.966 "trsvcid": "32772", 00:14:25.966 "trtype": "TCP" 00:14:25.966 }, 00:14:25.966 "qid": 0, 00:14:25.966 "state": "enabled", 00:14:25.966 "thread": "nvmf_tgt_poll_group_000" 00:14:25.966 } 00:14:25.966 ]' 00:14:25.966 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.966 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:25.966 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.966 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:25.966 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.224 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.224 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.224 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.482 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:14:27.046 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.046 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:27.046 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.046 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.046 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:14:27.046 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.046 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:27.046 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.304 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.870 00:14:27.870 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.870 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.870 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.127 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.127 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.127 16:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.127 16:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.127 16:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.127 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.127 { 00:14:28.127 "auth": { 00:14:28.127 "dhgroup": "ffdhe6144", 00:14:28.127 "digest": "sha512", 00:14:28.127 "state": "completed" 00:14:28.127 }, 00:14:28.127 "cntlid": 131, 00:14:28.127 "listen_address": { 00:14:28.127 "adrfam": "IPv4", 00:14:28.127 "traddr": 
"10.0.0.2", 00:14:28.127 "trsvcid": "4420", 00:14:28.127 "trtype": "TCP" 00:14:28.127 }, 00:14:28.127 "peer_address": { 00:14:28.127 "adrfam": "IPv4", 00:14:28.127 "traddr": "10.0.0.1", 00:14:28.127 "trsvcid": "32816", 00:14:28.127 "trtype": "TCP" 00:14:28.127 }, 00:14:28.127 "qid": 0, 00:14:28.127 "state": "enabled", 00:14:28.127 "thread": "nvmf_tgt_poll_group_000" 00:14:28.127 } 00:14:28.127 ]' 00:14:28.127 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.127 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:28.127 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.385 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:28.385 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.385 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.385 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.385 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.656 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:14:29.231 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.231 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:29.231 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.231 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.231 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.231 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.231 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:29.231 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:29.803 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:14:29.803 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.803 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:29.803 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:29.803 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:29.803 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.803 16:00:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.803 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.803 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.803 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.803 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.803 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.061 00:14:30.061 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.061 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.061 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.319 16:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.319 16:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.319 16:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.319 { 00:14:30.319 "auth": { 00:14:30.319 "dhgroup": "ffdhe6144", 00:14:30.319 "digest": "sha512", 00:14:30.319 "state": "completed" 00:14:30.319 }, 00:14:30.319 "cntlid": 133, 00:14:30.319 "listen_address": { 00:14:30.319 "adrfam": "IPv4", 00:14:30.319 "traddr": "10.0.0.2", 00:14:30.319 "trsvcid": "4420", 00:14:30.319 "trtype": "TCP" 00:14:30.319 }, 00:14:30.319 "peer_address": { 00:14:30.319 "adrfam": "IPv4", 00:14:30.319 "traddr": "10.0.0.1", 00:14:30.319 "trsvcid": "32836", 00:14:30.319 "trtype": "TCP" 00:14:30.319 }, 00:14:30.319 "qid": 0, 00:14:30.319 "state": "enabled", 00:14:30.319 "thread": "nvmf_tgt_poll_group_000" 00:14:30.319 } 00:14:30.319 ]' 00:14:30.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.578 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:30.578 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.578 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:30.578 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.578 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.578 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:30.578 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.835 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:14:31.768 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.768 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:31.768 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.768 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.768 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.768 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.768 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:31.768 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.026 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.597 
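The check that follows each attach (target/auth.sh@44-48 in the trace) confirms on the target side that the qpair actually completed authentication with the expected digest and DH group, and the kernel initiator is then exercised through nvme-cli with literal DHHC-1 secrets. A minimal sketch of both steps, assuming the same variables and jq filters as the sketch earlier in this log (secret values elided):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d"

# target side: verify the admin qpair negotiated the expected digest/dhgroup and finished auth
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# kernel initiator: pass the DHHC-1 secrets directly on the command line (values elided here)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid a185c444-aaeb-4d13-aa60-df1b0266600d \
    --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0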
00:14:32.597 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.597 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.597 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.855 { 00:14:32.855 "auth": { 00:14:32.855 "dhgroup": "ffdhe6144", 00:14:32.855 "digest": "sha512", 00:14:32.855 "state": "completed" 00:14:32.855 }, 00:14:32.855 "cntlid": 135, 00:14:32.855 "listen_address": { 00:14:32.855 "adrfam": "IPv4", 00:14:32.855 "traddr": "10.0.0.2", 00:14:32.855 "trsvcid": "4420", 00:14:32.855 "trtype": "TCP" 00:14:32.855 }, 00:14:32.855 "peer_address": { 00:14:32.855 "adrfam": "IPv4", 00:14:32.855 "traddr": "10.0.0.1", 00:14:32.855 "trsvcid": "32872", 00:14:32.855 "trtype": "TCP" 00:14:32.855 }, 00:14:32.855 "qid": 0, 00:14:32.855 "state": "enabled", 00:14:32.855 "thread": "nvmf_tgt_poll_group_000" 00:14:32.855 } 00:14:32.855 ]' 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.855 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.113 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:14:34.055 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.056 16:00:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.056 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.990 00:14:34.990 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.990 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.990 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.990 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.990 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.990 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.990 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.990 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.990 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.990 { 00:14:34.990 "auth": { 00:14:34.990 "dhgroup": "ffdhe8192", 00:14:34.990 "digest": "sha512", 
00:14:34.990 "state": "completed" 00:14:34.990 }, 00:14:34.990 "cntlid": 137, 00:14:34.990 "listen_address": { 00:14:34.990 "adrfam": "IPv4", 00:14:34.990 "traddr": "10.0.0.2", 00:14:34.990 "trsvcid": "4420", 00:14:34.990 "trtype": "TCP" 00:14:34.990 }, 00:14:34.990 "peer_address": { 00:14:34.990 "adrfam": "IPv4", 00:14:34.990 "traddr": "10.0.0.1", 00:14:34.990 "trsvcid": "32900", 00:14:34.990 "trtype": "TCP" 00:14:34.990 }, 00:14:34.990 "qid": 0, 00:14:34.990 "state": "enabled", 00:14:34.990 "thread": "nvmf_tgt_poll_group_000" 00:14:34.990 } 00:14:34.990 ]' 00:14:34.990 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.248 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:35.248 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.248 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:35.248 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.248 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.248 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.248 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.506 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:14:36.071 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.071 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:36.071 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.071 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.071 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.071 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.071 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:36.071 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:36.647 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:14:36.647 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.647 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:36.647 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:36.647 16:00:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:36.647 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.647 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.647 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.647 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.647 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.647 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.647 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.228 00:14:37.228 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.228 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.228 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.485 { 00:14:37.485 "auth": { 00:14:37.485 "dhgroup": "ffdhe8192", 00:14:37.485 "digest": "sha512", 00:14:37.485 "state": "completed" 00:14:37.485 }, 00:14:37.485 "cntlid": 139, 00:14:37.485 "listen_address": { 00:14:37.485 "adrfam": "IPv4", 00:14:37.485 "traddr": "10.0.0.2", 00:14:37.485 "trsvcid": "4420", 00:14:37.485 "trtype": "TCP" 00:14:37.485 }, 00:14:37.485 "peer_address": { 00:14:37.485 "adrfam": "IPv4", 00:14:37.485 "traddr": "10.0.0.1", 00:14:37.485 "trsvcid": "35304", 00:14:37.485 "trtype": "TCP" 00:14:37.485 }, 00:14:37.485 "qid": 0, 00:14:37.485 "state": "enabled", 00:14:37.485 "thread": "nvmf_tgt_poll_group_000" 00:14:37.485 } 00:14:37.485 ]' 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.485 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.061 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:01:YWE5MWU2NmU2NjdkNWI4Yjc1MDhlNDNmMDRjNzYwNzAFx80v: --dhchap-ctrl-secret DHHC-1:02:N2E5NjBiMTc0NGM5MzI0ZjM4MmNjNjM1OTMwNTkxZGRiYmJlYjY4MWFhODhiOWIzJerZQg==: 00:14:38.656 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.656 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:38.656 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.656 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.656 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.656 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.656 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:38.656 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.918 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.482 00:14:39.482 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.482 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.482 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.753 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.753 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.753 16:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.753 16:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.753 16:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.753 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.753 { 00:14:39.753 "auth": { 00:14:39.753 "dhgroup": "ffdhe8192", 00:14:39.753 "digest": "sha512", 00:14:39.753 "state": "completed" 00:14:39.753 }, 00:14:39.753 "cntlid": 141, 00:14:39.753 "listen_address": { 00:14:39.753 "adrfam": "IPv4", 00:14:39.753 "traddr": "10.0.0.2", 00:14:39.753 "trsvcid": "4420", 00:14:39.753 "trtype": "TCP" 00:14:39.753 }, 00:14:39.753 "peer_address": { 00:14:39.753 "adrfam": "IPv4", 00:14:39.753 "traddr": "10.0.0.1", 00:14:39.753 "trsvcid": "35344", 00:14:39.753 "trtype": "TCP" 00:14:39.753 }, 00:14:39.753 "qid": 0, 00:14:39.753 "state": "enabled", 00:14:39.753 "thread": "nvmf_tgt_poll_group_000" 00:14:39.753 } 00:14:39.753 ]' 00:14:39.753 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.012 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.012 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.012 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:40.012 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.012 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.012 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.012 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.270 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:02:NWFmZTVkMzVmYjRjNmZlNDljOTdiYTRjMWEzNjQxNTFiZTM1MDc3MTU0MGU1YmJlMUz6Eg==: --dhchap-ctrl-secret DHHC-1:01:MTVmNGNjMDAwN2ZlYWZkOTA2ZjdiMjAyNTA3NDQ0NGIYN1ne: 00:14:41.212 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.212 16:00:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:41.212 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.212 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.212 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.212 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.212 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:41.212 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:41.470 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:14:41.470 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.470 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:41.470 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:41.470 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:41.470 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.470 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:14:41.470 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.470 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.470 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.470 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:41.470 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:42.095 00:14:42.095 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.095 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.095 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.353 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.353 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.353 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.353 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.353 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:42.353 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.353 { 00:14:42.353 "auth": { 00:14:42.353 "dhgroup": "ffdhe8192", 00:14:42.353 "digest": "sha512", 00:14:42.353 "state": "completed" 00:14:42.353 }, 00:14:42.353 "cntlid": 143, 00:14:42.353 "listen_address": { 00:14:42.353 "adrfam": "IPv4", 00:14:42.353 "traddr": "10.0.0.2", 00:14:42.353 "trsvcid": "4420", 00:14:42.353 "trtype": "TCP" 00:14:42.353 }, 00:14:42.353 "peer_address": { 00:14:42.353 "adrfam": "IPv4", 00:14:42.353 "traddr": "10.0.0.1", 00:14:42.353 "trsvcid": "35372", 00:14:42.353 "trtype": "TCP" 00:14:42.353 }, 00:14:42.353 "qid": 0, 00:14:42.353 "state": "enabled", 00:14:42.353 "thread": "nvmf_tgt_poll_group_000" 00:14:42.353 } 00:14:42.353 ]' 00:14:42.353 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.353 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.353 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.631 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:42.631 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.631 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.631 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.631 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.895 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:14:43.461 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.461 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:43.461 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.461 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.461 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.461 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:43.461 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:14:43.461 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:43.461 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:43.461 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:43.461 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.720 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.653 00:14:44.653 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.653 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.653 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.653 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.653 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.653 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.653 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.911 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.911 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.911 { 00:14:44.911 "auth": { 00:14:44.911 "dhgroup": "ffdhe8192", 00:14:44.911 "digest": "sha512", 00:14:44.911 "state": "completed" 00:14:44.911 }, 00:14:44.911 "cntlid": 145, 00:14:44.911 "listen_address": { 00:14:44.911 "adrfam": "IPv4", 00:14:44.911 "traddr": "10.0.0.2", 00:14:44.911 "trsvcid": "4420", 00:14:44.911 "trtype": "TCP" 00:14:44.911 }, 00:14:44.911 "peer_address": { 00:14:44.911 "adrfam": "IPv4", 00:14:44.911 "traddr": "10.0.0.1", 00:14:44.911 "trsvcid": "35390", 00:14:44.911 "trtype": "TCP" 00:14:44.911 }, 00:14:44.911 "qid": 0, 00:14:44.911 "state": "enabled", 00:14:44.911 "thread": "nvmf_tgt_poll_group_000" 00:14:44.911 } 
00:14:44.911 ]' 00:14:44.911 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.911 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.911 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.911 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:44.911 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.911 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.911 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.911 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.169 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:00:ZTU1OWIxZjUyYjhkZDJjNTgwZmMwMmZjOGQ1ZDJiMGQ0NzJkMDAxODE2MjA2Mjk0oRut5A==: --dhchap-ctrl-secret DHHC-1:03:MGY3Yjg3NzgyZjU4MjU3YThjMDJkZDIzZTA1MGM2YTNkZWNhMzQ1MGZhYzk2Y2Q3MjZmMGZjZWRjZmEzYTNhNhKHh4Y=: 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.113 16:00:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:46.113 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:46.679 2024/07/15 16:00:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:46.679 request: 00:14:46.679 { 00:14:46.679 "method": "bdev_nvme_attach_controller", 00:14:46.679 "params": { 00:14:46.679 "name": "nvme0", 00:14:46.679 "trtype": "tcp", 00:14:46.679 "traddr": "10.0.0.2", 00:14:46.679 "adrfam": "ipv4", 00:14:46.679 "trsvcid": "4420", 00:14:46.679 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:46.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d", 00:14:46.679 "prchk_reftag": false, 00:14:46.679 "prchk_guard": false, 00:14:46.679 "hdgst": false, 00:14:46.679 "ddgst": false, 00:14:46.679 "dhchap_key": "key2" 00:14:46.679 } 00:14:46.679 } 00:14:46.679 Got JSON-RPC error response 00:14:46.679 GoRPCClient: error on JSON-RPC call 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:46.679 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:47.245 2024/07/15 16:00:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:47.245 request: 00:14:47.245 { 00:14:47.245 "method": "bdev_nvme_attach_controller", 00:14:47.245 "params": { 00:14:47.245 "name": "nvme0", 00:14:47.245 "trtype": "tcp", 00:14:47.246 "traddr": "10.0.0.2", 00:14:47.246 "adrfam": "ipv4", 00:14:47.246 "trsvcid": "4420", 00:14:47.246 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:47.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d", 00:14:47.246 "prchk_reftag": false, 00:14:47.246 "prchk_guard": false, 00:14:47.246 "hdgst": false, 00:14:47.246 "ddgst": false, 00:14:47.246 "dhchap_key": "key1", 00:14:47.246 "dhchap_ctrlr_key": "ckey2" 00:14:47.246 } 00:14:47.246 } 00:14:47.246 Got JSON-RPC error response 00:14:47.246 GoRPCClient: error on JSON-RPC call 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key1 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.246 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.813 2024/07/15 16:00:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:47.813 request: 00:14:47.813 { 00:14:47.813 "method": "bdev_nvme_attach_controller", 00:14:47.813 "params": { 00:14:47.813 "name": "nvme0", 00:14:47.813 "trtype": "tcp", 00:14:47.813 "traddr": "10.0.0.2", 00:14:47.813 "adrfam": "ipv4", 00:14:47.813 "trsvcid": "4420", 00:14:47.813 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:14:47.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d", 00:14:47.813 "prchk_reftag": false, 00:14:47.813 "prchk_guard": false, 00:14:47.813 "hdgst": false, 00:14:47.813 "ddgst": false, 00:14:47.813 "dhchap_key": "key1", 00:14:47.813 "dhchap_ctrlr_key": "ckey1" 00:14:47.813 } 00:14:47.813 } 00:14:47.813 Got JSON-RPC error response 00:14:47.813 GoRPCClient: error on JSON-RPC call 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 78264 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 78264 ']' 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78264 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78264 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78264' 00:14:47.813 killing process with pid 78264 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78264 00:14:47.813 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78264 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=83193 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 83193 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 83193 ']' 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.072 16:00:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.072 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.446 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.446 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 83193 00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 83193 ']' 00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.447 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.447 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.447 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:49.447 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:14:49.447 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.447 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.713 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:50.283 00:14:50.283 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.283 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.283 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.554 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.554 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.554 16:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.554 16:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.554 16:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.554 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.554 { 00:14:50.554 "auth": { 00:14:50.554 "dhgroup": 
"ffdhe8192", 00:14:50.554 "digest": "sha512", 00:14:50.554 "state": "completed" 00:14:50.554 }, 00:14:50.554 "cntlid": 1, 00:14:50.554 "listen_address": { 00:14:50.554 "adrfam": "IPv4", 00:14:50.554 "traddr": "10.0.0.2", 00:14:50.554 "trsvcid": "4420", 00:14:50.554 "trtype": "TCP" 00:14:50.554 }, 00:14:50.554 "peer_address": { 00:14:50.554 "adrfam": "IPv4", 00:14:50.554 "traddr": "10.0.0.1", 00:14:50.554 "trsvcid": "60066", 00:14:50.554 "trtype": "TCP" 00:14:50.554 }, 00:14:50.554 "qid": 0, 00:14:50.554 "state": "enabled", 00:14:50.554 "thread": "nvmf_tgt_poll_group_000" 00:14:50.554 } 00:14:50.554 ]' 00:14:50.554 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.554 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.554 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.554 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:50.554 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.811 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.811 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.811 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.069 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-secret DHHC-1:03:M2ZjMzgxMmViNDllNWZhMzJiZTQwY2NiMGE0Y2JjMDdhMTczNTJiN2FiYmM4MjJhOWM2ZjU4ODFmNWVjNzY5Y40KTZo=: 00:14:51.636 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.636 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:51.636 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.636 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.636 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.636 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --dhchap-key key3 00:14:51.636 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.636 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.636 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.636 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:51.636 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:51.908 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.908 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:51.908 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.908 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:51.908 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.908 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:51.908 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.908 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.908 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.182 2024/07/15 16:00:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:52.182 request: 00:14:52.182 { 00:14:52.182 "method": "bdev_nvme_attach_controller", 00:14:52.182 "params": { 00:14:52.182 "name": "nvme0", 00:14:52.182 "trtype": "tcp", 00:14:52.182 "traddr": "10.0.0.2", 00:14:52.182 "adrfam": "ipv4", 00:14:52.182 "trsvcid": "4420", 00:14:52.182 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:52.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d", 00:14:52.182 "prchk_reftag": false, 00:14:52.182 "prchk_guard": false, 00:14:52.182 "hdgst": false, 00:14:52.182 "ddgst": false, 00:14:52.182 "dhchap_key": "key3" 00:14:52.182 } 00:14:52.182 } 00:14:52.182 Got JSON-RPC error response 00:14:52.182 GoRPCClient: error on JSON-RPC call 00:14:52.183 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:52.183 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:52.183 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:52.183 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:52.183 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:14:52.183 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:14:52.183 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
00:14:52.183 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:52.440 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.440 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:52.440 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.440 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:52.440 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.440 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:52.440 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.440 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.440 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.698 2024/07/15 16:00:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:52.698 request: 00:14:52.698 { 00:14:52.698 "method": "bdev_nvme_attach_controller", 00:14:52.698 "params": { 00:14:52.698 "name": "nvme0", 00:14:52.698 "trtype": "tcp", 00:14:52.698 "traddr": "10.0.0.2", 00:14:52.698 "adrfam": "ipv4", 00:14:52.698 "trsvcid": "4420", 00:14:52.698 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:52.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d", 00:14:52.698 "prchk_reftag": false, 00:14:52.698 "prchk_guard": false, 00:14:52.698 "hdgst": false, 00:14:52.698 "ddgst": false, 00:14:52.698 "dhchap_key": "key3" 00:14:52.698 } 00:14:52.698 } 00:14:52.698 Got JSON-RPC error response 00:14:52.698 GoRPCClient: error on JSON-RPC call 00:14:52.956 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:52.956 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:52.956 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:52.956 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:14:52.956 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:52.956 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:14:52.956 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:52.956 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:52.956 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:52.956 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:53.215 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:53.473 2024/07/15 16:00:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:53.473 request: 00:14:53.473 { 00:14:53.473 "method": "bdev_nvme_attach_controller", 00:14:53.473 "params": { 00:14:53.473 "name": "nvme0", 00:14:53.473 "trtype": "tcp", 00:14:53.473 "traddr": "10.0.0.2", 00:14:53.473 "adrfam": "ipv4", 00:14:53.473 "trsvcid": "4420", 00:14:53.473 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:53.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d", 00:14:53.473 "prchk_reftag": false, 00:14:53.473 "prchk_guard": false, 00:14:53.473 "hdgst": false, 00:14:53.473 "ddgst": false, 00:14:53.473 "dhchap_key": "key0", 00:14:53.473 "dhchap_ctrlr_key": "key1" 00:14:53.473 } 00:14:53.473 } 00:14:53.473 Got JSON-RPC error response 00:14:53.473 GoRPCClient: error on JSON-RPC call 00:14:53.473 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:53.473 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:53.473 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:53.473 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:53.473 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:53.473 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:53.732 00:14:53.732 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:14:53.732 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:14:53.732 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.990 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.990 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.990 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 78308 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 78308 ']' 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78308 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78308 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78308' 00:14:54.249 killing process with pid 78308 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78308 00:14:54.249 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78308 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:54.814 rmmod nvme_tcp 00:14:54.814 rmmod nvme_fabrics 00:14:54.814 rmmod nvme_keyring 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 83193 ']' 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 83193 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 83193 ']' 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 83193 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83193 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:54.814 killing process with pid 83193 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83193' 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 83193 00:14:54.814 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 83193 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Uz0 /tmp/spdk.key-sha256.7HD /tmp/spdk.key-sha384.ybI /tmp/spdk.key-sha512.cmT /tmp/spdk.key-sha512.Uv5 /tmp/spdk.key-sha384.ZqD /tmp/spdk.key-sha256.xbS '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:55.071 ************************************ 00:14:55.071 END TEST nvmf_auth_target 00:14:55.071 ************************************ 00:14:55.071 00:14:55.071 real 2m56.635s 00:14:55.071 user 7m10.135s 00:14:55.071 sys 0m22.889s 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.071 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.071 16:00:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:55.071 16:00:48 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:14:55.071 16:00:48 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:55.071 16:00:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:55.071 16:00:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.071 16:00:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.071 ************************************ 00:14:55.071 START TEST nvmf_bdevio_no_huge 00:14:55.071 ************************************ 00:14:55.071 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:55.329 * Looking for test storage... 
00:14:55.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.329 16:00:48 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:55.329 Cannot find device "nvmf_tgt_br" 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:55.329 Cannot find device "nvmf_tgt_br2" 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:55.329 Cannot find device "nvmf_tgt_br" 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:55.329 Cannot find device "nvmf_tgt_br2" 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:55.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:55.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:55.329 16:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:55.329 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:55.329 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:55.329 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:55.329 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:55.329 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:55.329 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:55.329 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:55.329 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:55.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:14:55.587 00:14:55.587 --- 10.0.0.2 ping statistics --- 00:14:55.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.587 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:55.587 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:55.587 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:14:55.587 00:14:55.587 --- 10.0.0.3 ping statistics --- 00:14:55.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.587 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:55.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:55.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:55.587 00:14:55.587 --- 10.0.0.1 ping statistics --- 00:14:55.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.587 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83606 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83606 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83606 ']' 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:55.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.587 16:00:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:55.587 [2024-07-15 16:00:49.250892] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:14:55.587 [2024-07-15 16:00:49.251026] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:55.845 [2024-07-15 16:00:49.400262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.845 [2024-07-15 16:00:49.522489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.845 [2024-07-15 16:00:49.522553] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.845 [2024-07-15 16:00:49.522565] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.845 [2024-07-15 16:00:49.522574] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.845 [2024-07-15 16:00:49.522582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.845 [2024-07-15 16:00:49.523407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:55.845 [2024-07-15 16:00:49.523505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:55.845 [2024-07-15 16:00:49.523630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:55.845 [2024-07-15 16:00:49.523643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:56.780 [2024-07-15 16:00:50.329597] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:56.780 Malloc0 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:56.780 [2024-07-15 16:00:50.377757] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.780 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:56.781 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:56.781 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:56.781 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:56.781 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:56.781 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:56.781 { 00:14:56.781 "params": { 00:14:56.781 "name": "Nvme$subsystem", 00:14:56.781 "trtype": "$TEST_TRANSPORT", 00:14:56.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:56.781 "adrfam": "ipv4", 00:14:56.781 "trsvcid": "$NVMF_PORT", 00:14:56.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:56.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:56.781 "hdgst": ${hdgst:-false}, 00:14:56.781 "ddgst": ${ddgst:-false} 00:14:56.781 }, 00:14:56.781 "method": "bdev_nvme_attach_controller" 00:14:56.781 } 00:14:56.781 EOF 00:14:56.781 )") 00:14:56.781 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:56.781 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:14:56.781 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:56.781 16:00:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:56.781 "params": { 00:14:56.781 "name": "Nvme1", 00:14:56.781 "trtype": "tcp", 00:14:56.781 "traddr": "10.0.0.2", 00:14:56.781 "adrfam": "ipv4", 00:14:56.781 "trsvcid": "4420", 00:14:56.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.781 "hdgst": false, 00:14:56.781 "ddgst": false 00:14:56.781 }, 00:14:56.781 "method": "bdev_nvme_attach_controller" 00:14:56.781 }' 00:14:56.781 [2024-07-15 16:00:50.440364] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:14:56.781 [2024-07-15 16:00:50.440479] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83660 ] 00:14:57.039 [2024-07-15 16:00:50.592384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:57.300 [2024-07-15 16:00:50.775020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.300 [2024-07-15 16:00:50.775140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.300 [2024-07-15 16:00:50.775405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.300 I/O targets: 00:14:57.300 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:57.300 00:14:57.300 00:14:57.300 CUnit - A unit testing framework for C - Version 2.1-3 00:14:57.300 http://cunit.sourceforge.net/ 00:14:57.300 00:14:57.300 00:14:57.300 Suite: bdevio tests on: Nvme1n1 00:14:57.300 Test: blockdev write read block ...passed 00:14:57.565 Test: blockdev write zeroes read block ...passed 00:14:57.565 Test: blockdev write zeroes read no split ...passed 00:14:57.566 Test: blockdev write zeroes read split ...passed 00:14:57.566 Test: blockdev write zeroes read split partial ...passed 00:14:57.566 Test: blockdev reset ...[2024-07-15 16:00:51.088753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:57.566 [2024-07-15 16:00:51.088885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf91460 (9): Bad file descriptor 00:14:57.566 [2024-07-15 16:00:51.103676] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:57.566 passed 00:14:57.566 Test: blockdev write read 8 blocks ...passed 00:14:57.566 Test: blockdev write read size > 128k ...passed 00:14:57.566 Test: blockdev write read invalid size ...passed 00:14:57.566 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:57.566 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:57.566 Test: blockdev write read max offset ...passed 00:14:57.566 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:57.566 Test: blockdev writev readv 8 blocks ...passed 00:14:57.566 Test: blockdev writev readv 30 x 1block ...passed 00:14:57.566 Test: blockdev writev readv block ...passed 00:14:57.566 Test: blockdev writev readv size > 128k ...passed 00:14:57.566 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:57.566 Test: blockdev comparev and writev ...[2024-07-15 16:00:51.278085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:57.566 [2024-07-15 16:00:51.278147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:57.566 [2024-07-15 16:00:51.278168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:57.566 [2024-07-15 16:00:51.278179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:57.566 [2024-07-15 16:00:51.278782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:57.566 [2024-07-15 16:00:51.278812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:57.566 [2024-07-15 16:00:51.278831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:57.566 [2024-07-15 16:00:51.278842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:57.566 [2024-07-15 16:00:51.279351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:57.566 [2024-07-15 16:00:51.279385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:57.566 [2024-07-15 16:00:51.279404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:57.566 [2024-07-15 16:00:51.279416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:57.566 [2024-07-15 16:00:51.279844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:57.566 [2024-07-15 16:00:51.279873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:57.566 [2024-07-15 16:00:51.279891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:57.566 [2024-07-15 16:00:51.279902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:57.823 passed 00:14:57.823 Test: blockdev nvme passthru rw ...passed 00:14:57.823 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:00:51.364456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:57.823 [2024-07-15 16:00:51.364846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:57.823 [2024-07-15 16:00:51.365242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:57.823 [2024-07-15 16:00:51.365273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:57.823 [2024-07-15 16:00:51.365538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:57.823 [2024-07-15 16:00:51.365619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:57.823 [2024-07-15 16:00:51.365875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:57.823 [2024-07-15 16:00:51.365904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:57.823 passed 00:14:57.823 Test: blockdev nvme admin passthru ...passed 00:14:57.823 Test: blockdev copy ...passed 00:14:57.823 00:14:57.823 Run Summary: Type Total Ran Passed Failed Inactive 00:14:57.823 suites 1 1 n/a 0 0 00:14:57.823 tests 23 23 23 0 0 00:14:57.823 asserts 152 152 152 0 n/a 00:14:57.823 00:14:57.823 Elapsed time = 0.930 seconds 00:14:58.389 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.389 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.389 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:58.389 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.389 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:58.389 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:58.389 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:58.390 rmmod nvme_tcp 00:14:58.390 rmmod nvme_fabrics 00:14:58.390 rmmod nvme_keyring 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83606 ']' 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 83606 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83606 ']' 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83606 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83606 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:58.390 killing process with pid 83606 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83606' 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83606 00:14:58.390 16:00:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83606 00:14:58.955 16:00:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:58.955 16:00:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:58.955 16:00:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:58.955 16:00:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.955 16:00:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:58.955 16:00:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.955 16:00:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.955 16:00:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.955 16:00:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:58.955 00:14:58.955 real 0m3.686s 00:14:58.955 user 0m13.540s 00:14:58.955 sys 0m1.398s 00:14:58.955 16:00:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.955 16:00:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:58.955 ************************************ 00:14:58.955 END TEST nvmf_bdevio_no_huge 00:14:58.955 ************************************ 00:14:58.955 16:00:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:58.955 16:00:52 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:58.955 16:00:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:58.955 16:00:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.955 16:00:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:58.955 ************************************ 00:14:58.955 START TEST nvmf_tls 00:14:58.955 ************************************ 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:58.955 * Looking for test storage... 
00:14:58.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.955 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:58.956 Cannot find device "nvmf_tgt_br" 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:58.956 Cannot find device "nvmf_tgt_br2" 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:58.956 Cannot find device "nvmf_tgt_br" 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:58.956 Cannot find device "nvmf_tgt_br2" 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:14:58.956 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:59.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:14:59.213 00:14:59.213 --- 10.0.0.2 ping statistics --- 00:14:59.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.213 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:59.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:59.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:14:59.213 00:14:59.213 --- 10.0.0.3 ping statistics --- 00:14:59.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.213 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:59.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:59.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:59.213 00:14:59.213 --- 10.0.0.1 ping statistics --- 00:14:59.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.213 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.213 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83850 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83850 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83850 ']' 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.471 16:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.471 [2024-07-15 16:00:53.001512] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:14:59.471 [2024-07-15 16:00:53.001612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.471 [2024-07-15 16:00:53.136179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.728 [2024-07-15 16:00:53.280658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.728 [2024-07-15 16:00:53.280727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:59.728 [2024-07-15 16:00:53.280742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.728 [2024-07-15 16:00:53.280756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.728 [2024-07-15 16:00:53.280769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.728 [2024-07-15 16:00:53.280805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.669 16:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.669 16:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:00.669 16:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:00.669 16:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:00.669 16:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.669 16:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.669 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:15:00.669 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:00.669 true 00:15:00.669 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:15:00.669 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:00.927 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:15:00.927 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:15:00.927 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:01.185 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:15:01.185 16:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:01.443 16:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:15:01.443 16:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:15:01.443 16:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:02.022 16:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:15:02.022 16:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:02.022 16:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:15:02.022 16:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:15:02.022 16:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:02.022 16:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:15:02.606 16:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:15:02.606 16:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:15:02.606 16:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:02.606 16:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:02.606 16:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
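Condensed from the xtrace above, the TLS socket configuration being exercised is a short sequence of JSON-RPC calls against the running nvmf_tgt. A sketch, not the literal test code; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version    # expect 13
  rpc.py sock_impl_set_options -i ssl --tls-version 7
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version    # expect 7
  rpc.py sock_impl_set_options -i ssl --enable-ktls           # toggled back off below with --disable-ktls
  rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls    # expect true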
00:15:02.865 16:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:15:02.865 16:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:15:02.865 16:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:03.124 16:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:03.124 16:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:15:03.382 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:15:03.382 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:15:03.382 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:03.382 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:03.382 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:03.382 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:03.382 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:15:03.382 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:03.382 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:03.639 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:03.639 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:03.639 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:03.639 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:03.639 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.haDzfM17lX 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.aQrBs5MIUT 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.haDzfM17lX 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aQrBs5MIUT 00:15:03.640 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:03.897 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:04.155 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.haDzfM17lX 
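The NVMeTLSkey-1 strings above are produced by format_interchange_psk. A minimal standalone sketch of that encoding, assuming (as the helper appears to do) that a 4-byte little-endian CRC32 of the configured key is appended before base64 encoding and that '01' identifies the SHA-256 hash:

  key="00112233445566778899aabbccddeeff"   # first configured PSK from the test above
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:01:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key"
  # under those assumptions this reproduces the NVMeTLSkey-1:01:MDAx...JEiQ: value logged above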
00:15:04.155 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.haDzfM17lX 00:15:04.155 16:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:04.413 [2024-07-15 16:00:58.101850] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.413 16:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:04.979 16:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:05.236 [2024-07-15 16:00:58.742011] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:05.236 [2024-07-15 16:00:58.742237] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.236 16:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:05.494 malloc0 00:15:05.494 16:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:05.753 16:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.haDzfM17lX 00:15:06.019 [2024-07-15 16:00:59.645319] tcp.c:3710:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:06.019 16:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.haDzfM17lX 00:15:16.223 Initializing NVMe Controllers 00:15:16.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:16.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:16.223 Initialization complete. Launching workers. 
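Stripped of xtrace noise, the setup_nvmf_tgt sequence above plus the initiator command whose output follows reduces to roughly this flow (a sketch; rpc.py and spdk_nvme_perf abbreviate the full paths used in the log):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.haDzfM17lX
  # initiator side, run inside the nvmf_tgt_ns_spdk namespace, using the same key file:
  ip netns exec nvmf_tgt_ns_spdk spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path /tmp/tmp.haDzfM17lX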
00:15:16.223 ======================================================== 00:15:16.223 Latency(us) 00:15:16.223 Device Information : IOPS MiB/s Average min max 00:15:16.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9796.57 38.27 6542.32 2560.56 47539.68 00:15:16.223 ======================================================== 00:15:16.223 Total : 9796.57 38.27 6542.32 2560.56 47539.68 00:15:16.223 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.haDzfM17lX 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.haDzfM17lX' 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84208 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84208 /var/tmp/bdevperf.sock 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84208 ']' 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.223 16:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.224 16:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.224 16:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.224 16:01:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.526 [2024-07-15 16:01:09.978452] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:15:16.526 [2024-07-15 16:01:09.979236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84208 ] 00:15:16.526 [2024-07-15 16:01:10.122233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.526 [2024-07-15 16:01:10.253304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.472 16:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.473 16:01:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:17.473 16:01:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.haDzfM17lX 00:15:17.730 [2024-07-15 16:01:11.234353] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.730 [2024-07-15 16:01:11.234492] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:17.730 TLSTESTn1 00:15:17.730 16:01:11 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:17.730 Running I/O for 10 seconds... 00:15:29.925 00:15:29.925 Latency(us) 00:15:29.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.925 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:29.925 Verification LBA range: start 0x0 length 0x2000 00:15:29.925 TLSTESTn1 : 10.02 4009.44 15.66 0.00 0.00 31860.66 6613.18 24188.74 00:15:29.925 =================================================================================================================== 00:15:29.925 Total : 4009.44 15.66 0.00 0.00 31860.66 6613.18 24188.74 00:15:29.925 0 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84208 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84208 ']' 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84208 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84208 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:29.925 killing process with pid 84208 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84208' 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84208 00:15:29.925 Received shutdown signal, test time was about 10.000000 seconds 00:15:29.925 00:15:29.925 Latency(us) 00:15:29.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.925 
=================================================================================================================== 00:15:29.925 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84208 00:15:29.925 [2024-07-15 16:01:21.512107] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQrBs5MIUT 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQrBs5MIUT 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQrBs5MIUT 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aQrBs5MIUT' 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84359 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84359 /var/tmp/bdevperf.sock 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84359 ']' 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.925 16:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.925 [2024-07-15 16:01:21.819702] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
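The host-side pattern that run_bdevperf used for the successful case above, and repeats for each NOT case below, boils down to the following (paths abbreviated and the backgrounding simplified; a sketch of the flow, not the helper itself):

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &    # -z: start idle, wait to be driven over RPC
  waitforlisten "$!" /var/tmp/bdevperf.sock
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.haDzfM17lX
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests    # drives verify I/O against the TLSTESTn1 bdev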
00:15:29.925 [2024-07-15 16:01:21.819808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84359 ] 00:15:29.925 [2024-07-15 16:01:21.951602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.925 [2024-07-15 16:01:22.074449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.925 16:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.925 16:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:29.925 16:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aQrBs5MIUT 00:15:29.925 [2024-07-15 16:01:23.058798] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:29.925 [2024-07-15 16:01:23.058934] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:29.925 [2024-07-15 16:01:23.067140] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:29.925 [2024-07-15 16:01:23.067788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76eca0 (107): Transport endpoint is not connected 00:15:29.925 [2024-07-15 16:01:23.068773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76eca0 (9): Bad file descriptor 00:15:29.925 [2024-07-15 16:01:23.069768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:29.925 [2024-07-15 16:01:23.069799] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:29.925 [2024-07-15 16:01:23.069816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:29.925 2024/07/15 16:01:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.aQrBs5MIUT subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:29.925 request: 00:15:29.925 { 00:15:29.925 "method": "bdev_nvme_attach_controller", 00:15:29.925 "params": { 00:15:29.925 "name": "TLSTEST", 00:15:29.925 "trtype": "tcp", 00:15:29.925 "traddr": "10.0.0.2", 00:15:29.925 "adrfam": "ipv4", 00:15:29.925 "trsvcid": "4420", 00:15:29.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:29.925 "prchk_reftag": false, 00:15:29.925 "prchk_guard": false, 00:15:29.925 "hdgst": false, 00:15:29.925 "ddgst": false, 00:15:29.925 "psk": "/tmp/tmp.aQrBs5MIUT" 00:15:29.925 } 00:15:29.925 } 00:15:29.925 Got JSON-RPC error response 00:15:29.925 GoRPCClient: error on JSON-RPC call 00:15:29.925 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84359 00:15:29.925 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84359 ']' 00:15:29.925 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84359 00:15:29.925 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:29.925 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.925 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84359 00:15:29.925 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:29.925 killing process with pid 84359 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84359' 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84359 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84359 00:15:29.926 Received shutdown signal, test time was about 10.000000 seconds 00:15:29.926 00:15:29.926 Latency(us) 00:15:29.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.926 =================================================================================================================== 00:15:29.926 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:29.926 [2024-07-15 16:01:23.126121] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.haDzfM17lX 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.haDzfM17lX 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.haDzfM17lX 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.haDzfM17lX' 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84405 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84405 /var/tmp/bdevperf.sock 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84405 ']' 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.926 16:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.926 [2024-07-15 16:01:23.459399] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:15:29.926 [2024-07-15 16:01:23.459562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84405 ] 00:15:29.926 [2024-07-15 16:01:23.603719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.184 [2024-07-15 16:01:23.732061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.750 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.750 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:30.750 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.haDzfM17lX 00:15:31.008 [2024-07-15 16:01:24.659585] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:31.008 [2024-07-15 16:01:24.659711] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:31.008 [2024-07-15 16:01:24.667190] tcp.c: 918:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:31.008 [2024-07-15 16:01:24.667236] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:31.008 [2024-07-15 16:01:24.667291] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:31.008 [2024-07-15 16:01:24.667718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d5ca0 (107): Transport endpoint is not connected 00:15:31.008 [2024-07-15 16:01:24.668699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d5ca0 (9): Bad file descriptor 00:15:31.008 [2024-07-15 16:01:24.669694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:31.008 [2024-07-15 16:01:24.669724] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:31.008 [2024-07-15 16:01:24.669741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:31.008 2024/07/15 16:01:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.haDzfM17lX subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:31.008 request: 00:15:31.008 { 00:15:31.008 "method": "bdev_nvme_attach_controller", 00:15:31.008 "params": { 00:15:31.008 "name": "TLSTEST", 00:15:31.008 "trtype": "tcp", 00:15:31.008 "traddr": "10.0.0.2", 00:15:31.008 "adrfam": "ipv4", 00:15:31.008 "trsvcid": "4420", 00:15:31.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.008 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:31.008 "prchk_reftag": false, 00:15:31.008 "prchk_guard": false, 00:15:31.008 "hdgst": false, 00:15:31.008 "ddgst": false, 00:15:31.008 "psk": "/tmp/tmp.haDzfM17lX" 00:15:31.008 } 00:15:31.008 } 00:15:31.008 Got JSON-RPC error response 00:15:31.008 GoRPCClient: error on JSON-RPC call 00:15:31.008 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84405 00:15:31.008 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84405 ']' 00:15:31.008 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84405 00:15:31.008 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:31.008 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.008 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84405 00:15:31.008 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:31.008 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:31.008 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84405' 00:15:31.008 killing process with pid 84405 00:15:31.008 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84405 00:15:31.008 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84405 00:15:31.008 Received shutdown signal, test time was about 10.000000 seconds 00:15:31.008 00:15:31.008 Latency(us) 00:15:31.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.008 =================================================================================================================== 00:15:31.008 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:31.008 [2024-07-15 16:01:24.720560] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.haDzfM17lX 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.haDzfM17lX 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.haDzfM17lX 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.haDzfM17lX' 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84449 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84449 /var/tmp/bdevperf.sock 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84449 ']' 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.266 16:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.524 [2024-07-15 16:01:25.002731] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:15:31.524 [2024-07-15 16:01:25.002831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84449 ] 00:15:31.524 [2024-07-15 16:01:25.137008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.782 [2024-07-15 16:01:25.261463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.348 16:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.348 16:01:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:32.348 16:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.haDzfM17lX 00:15:32.606 [2024-07-15 16:01:26.294369] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:32.606 [2024-07-15 16:01:26.295209] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:32.606 [2024-07-15 16:01:26.300577] tcp.c: 918:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:32.606 [2024-07-15 16:01:26.301047] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:32.606 [2024-07-15 16:01:26.301339] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:32.606 [2024-07-15 16:01:26.302944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186fca0 (107): Transport endpoint is not connected 00:15:32.606 [2024-07-15 16:01:26.303907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186fca0 (9): Bad file descriptor 00:15:32.606 [2024-07-15 16:01:26.304903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:32.606 [2024-07-15 16:01:26.305041] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:32.606 [2024-07-15 16:01:26.305403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
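Each negative case above and below follows the same shape: attach with a mismatched key, an unregistered hostnqn, a different subsystem, or no PSK at all, and expect bdev_nvme_attach_controller to fail with Code=-5 (Input/output error). Schematically, with NOT being the autotest helper that succeeds only when the wrapped command fails:

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQrBs5MIUT    # wrong key
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.haDzfM17lX    # hostnqn not registered for the PSK
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.haDzfM17lX    # subsystem with no PSK registered
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''                     # no PSK at all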
00:15:32.606 2024/07/15 16:01:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.haDzfM17lX subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:32.606 request: 00:15:32.606 { 00:15:32.606 "method": "bdev_nvme_attach_controller", 00:15:32.606 "params": { 00:15:32.606 "name": "TLSTEST", 00:15:32.606 "trtype": "tcp", 00:15:32.606 "traddr": "10.0.0.2", 00:15:32.606 "adrfam": "ipv4", 00:15:32.606 "trsvcid": "4420", 00:15:32.606 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:32.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.606 "prchk_reftag": false, 00:15:32.606 "prchk_guard": false, 00:15:32.606 "hdgst": false, 00:15:32.606 "ddgst": false, 00:15:32.606 "psk": "/tmp/tmp.haDzfM17lX" 00:15:32.606 } 00:15:32.606 } 00:15:32.606 Got JSON-RPC error response 00:15:32.606 GoRPCClient: error on JSON-RPC call 00:15:32.606 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84449 00:15:32.606 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84449 ']' 00:15:32.606 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84449 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84449 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84449' 00:15:32.864 killing process with pid 84449 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84449 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84449 00:15:32.864 Received shutdown signal, test time was about 10.000000 seconds 00:15:32.864 00:15:32.864 Latency(us) 00:15:32.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.864 =================================================================================================================== 00:15:32.864 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:32.864 [2024-07-15 16:01:26.356265] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84496 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84496 /var/tmp/bdevperf.sock 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84496 ']' 00:15:32.864 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.122 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:33.122 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.122 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.122 16:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.122 [2024-07-15 16:01:26.651903] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:15:33.122 [2024-07-15 16:01:26.652046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84496 ] 00:15:33.122 [2024-07-15 16:01:26.787822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.380 [2024-07-15 16:01:26.909892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.944 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.944 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:33.944 16:01:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:34.202 [2024-07-15 16:01:27.897848] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:34.202 [2024-07-15 16:01:27.899504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3e240 (9): Bad file descriptor 00:15:34.202 [2024-07-15 16:01:27.900499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:34.202 [2024-07-15 16:01:27.900544] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:34.202 [2024-07-15 16:01:27.900560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:34.202 2024/07/15 16:01:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:34.202 request: 00:15:34.202 { 00:15:34.202 "method": "bdev_nvme_attach_controller", 00:15:34.202 "params": { 00:15:34.202 "name": "TLSTEST", 00:15:34.202 "trtype": "tcp", 00:15:34.202 "traddr": "10.0.0.2", 00:15:34.202 "adrfam": "ipv4", 00:15:34.202 "trsvcid": "4420", 00:15:34.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.202 "prchk_reftag": false, 00:15:34.202 "prchk_guard": false, 00:15:34.202 "hdgst": false, 00:15:34.202 "ddgst": false 00:15:34.202 } 00:15:34.202 } 00:15:34.202 Got JSON-RPC error response 00:15:34.202 GoRPCClient: error on JSON-RPC call 00:15:34.202 16:01:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84496 00:15:34.202 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84496 ']' 00:15:34.202 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84496 00:15:34.202 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:34.202 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.202 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84496 00:15:34.460 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:34.460 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:15:34.460 killing process with pid 84496 00:15:34.460 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84496' 00:15:34.460 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84496 00:15:34.460 16:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84496 00:15:34.460 Received shutdown signal, test time was about 10.000000 seconds 00:15:34.460 00:15:34.460 Latency(us) 00:15:34.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.460 =================================================================================================================== 00:15:34.460 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:34.460 16:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:34.460 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:34.460 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:34.460 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:34.460 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:34.460 16:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83850 00:15:34.460 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83850 ']' 00:15:34.460 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83850 00:15:34.460 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:34.460 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.460 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83850 00:15:34.718 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:34.718 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:34.718 killing process with pid 83850 00:15:34.718 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83850' 00:15:34.718 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83850 00:15:34.718 [2024-07-15 16:01:28.203634] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:34.718 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83850 00:15:34.976 16:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:34.976 16:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:34.976 16:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:34.976 16:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:34.976 16:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:34.976 16:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:15:34.976 16:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:34.976 16:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:34.976 16:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.JgefRAlb9G 
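The format_interchange_psk call traced just above turns the configured 48-character secret into the NVMe TLS PSK interchange form NVMeTLSkey-1:<hash>:<base64>:, where the base64 payload is the secret with a CRC32 appended; the hash field value 02 is copied from the trace (reading it as SHA-384 is an assumption). Below is a sketch of that transformation which only approximates the format_key helper in nvmf/common.sh: little-endian CRC byte order is assumed and the exact trailing CRC characters in the key above were not re-verified.

# Sketch only: approximate equivalent of the format_interchange_psk call above.
key=00112233445566778899aabbccddeeff0011223344556677
python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key"

The resulting string is what tls.sh writes into the mktemp file (/tmp/tmp.JgefRAlb9G) and locks down to mode 0600 in the next step.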
00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.JgefRAlb9G 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84557 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84557 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84557 ']' 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.977 16:01:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.977 [2024-07-15 16:01:28.576094] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:15:34.977 [2024-07-15 16:01:28.576206] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.234 [2024-07-15 16:01:28.706911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.234 [2024-07-15 16:01:28.826663] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.234 [2024-07-15 16:01:28.826731] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.234 [2024-07-15 16:01:28.826743] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.234 [2024-07-15 16:01:28.826752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.234 [2024-07-15 16:01:28.826760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
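The startup notices above describe the tracing hooks of this nvmf_tgt instance (tracepoint group mask 0xFFFF, shm id 0). Both commands below are quoted from those notices rather than added: one takes a live snapshot of trace events, the other preserves the shared-memory trace file for offline analysis.

# Both lines come straight from the startup notices above (shm id 0).
spdk_trace -s nvmf -i 0        # capture a snapshot of events at runtime
cp /dev/shm/nvmf_trace.0 .     # keep the trace file for offline analysis/debug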
00:15:35.234 [2024-07-15 16:01:28.826787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.167 16:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.167 16:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:36.167 16:01:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.167 16:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.167 16:01:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.167 16:01:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.167 16:01:29 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.JgefRAlb9G 00:15:36.167 16:01:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JgefRAlb9G 00:15:36.167 16:01:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:36.167 [2024-07-15 16:01:29.834842] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.167 16:01:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:36.425 16:01:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:36.682 [2024-07-15 16:01:30.350936] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:36.682 [2024-07-15 16:01:30.351174] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.682 16:01:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:36.939 malloc0 00:15:36.939 16:01:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:37.196 16:01:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JgefRAlb9G 00:15:37.454 [2024-07-15 16:01:31.122451] tcp.c:3710:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JgefRAlb9G 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JgefRAlb9G' 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84654 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:37.454 16:01:31 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84654 /var/tmp/bdevperf.sock 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84654 ']' 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.454 16:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.712 [2024-07-15 16:01:31.193031] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:15:37.712 [2024-07-15 16:01:31.193127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84654 ] 00:15:37.712 [2024-07-15 16:01:31.327239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.969 [2024-07-15 16:01:31.448284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.534 16:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.534 16:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:38.534 16:01:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JgefRAlb9G 00:15:38.791 [2024-07-15 16:01:32.385524] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:38.791 [2024-07-15 16:01:32.385654] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:38.791 TLSTESTn1 00:15:38.791 16:01:32 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:39.048 Running I/O for 10 seconds... 
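Pulling the RPCs out of the setup_nvmf_tgt and run_bdevperf traces above gives the whole happy-path TLS flow in one place. Every command and argument below is copied from the trace; only the $rpc shorthand and the grouping comments are additions.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.JgefRAlb9G    # PSK interchange key file, mode 0600

# Target side: TCP transport, subsystem with a malloc namespace, a TLS-enabled
# listener (-k), and a host entry that binds host1 to the PSK file.
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

# Initiator side: attach through the bdevperf RPC socket with the same PSK,
# then kick off the workload bdevperf was started with (-w verify -t 10).
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$key"
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests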
00:15:49.053 00:15:49.053 Latency(us) 00:15:49.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.053 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:49.053 Verification LBA range: start 0x0 length 0x2000 00:15:49.053 TLSTESTn1 : 10.02 3833.03 14.97 0.00 0.00 33328.52 6791.91 36461.85 00:15:49.053 =================================================================================================================== 00:15:49.053 Total : 3833.03 14.97 0.00 0.00 33328.52 6791.91 36461.85 00:15:49.053 0 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84654 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84654 ']' 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84654 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84654 00:15:49.053 killing process with pid 84654 00:15:49.053 Received shutdown signal, test time was about 10.000000 seconds 00:15:49.053 00:15:49.053 Latency(us) 00:15:49.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.053 =================================================================================================================== 00:15:49.053 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84654' 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84654 00:15:49.053 [2024-07-15 16:01:42.654170] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:49.053 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84654 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.JgefRAlb9G 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JgefRAlb9G 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JgefRAlb9G 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JgefRAlb9G 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:49.311 
16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JgefRAlb9G' 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84807 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84807 /var/tmp/bdevperf.sock 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84807 ']' 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:49.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.311 16:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:49.311 [2024-07-15 16:01:42.953774] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:15:49.311 [2024-07-15 16:01:42.953977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84807 ] 00:15:49.568 [2024-07-15 16:01:43.092256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.568 [2024-07-15 16:01:43.220860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.500 16:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.500 16:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:50.500 16:01:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JgefRAlb9G 00:15:50.500 [2024-07-15 16:01:44.222741] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:50.501 [2024-07-15 16:01:44.222846] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:50.501 [2024-07-15 16:01:44.222865] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.JgefRAlb9G 00:15:50.501 2024/07/15 16:01:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.JgefRAlb9G subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:15:50.780 request: 00:15:50.780 { 00:15:50.780 "method": "bdev_nvme_attach_controller", 00:15:50.780 "params": { 00:15:50.780 "name": "TLSTEST", 00:15:50.780 "trtype": "tcp", 00:15:50.780 "traddr": "10.0.0.2", 00:15:50.780 "adrfam": "ipv4", 00:15:50.780 "trsvcid": "4420", 00:15:50.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:50.780 "prchk_reftag": false, 00:15:50.780 "prchk_guard": false, 00:15:50.780 "hdgst": false, 00:15:50.780 "ddgst": false, 00:15:50.780 "psk": "/tmp/tmp.JgefRAlb9G" 00:15:50.780 } 00:15:50.780 } 00:15:50.780 Got JSON-RPC error response 00:15:50.780 GoRPCClient: error on JSON-RPC call 00:15:50.780 16:01:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84807 00:15:50.780 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84807 ']' 00:15:50.780 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84807 00:15:50.780 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:50.780 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.780 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84807 00:15:50.780 killing process with pid 84807 00:15:50.780 Received shutdown signal, test time was about 10.000000 seconds 00:15:50.780 00:15:50.780 Latency(us) 00:15:50.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.780 =================================================================================================================== 00:15:50.780 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:50.780 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:50.780 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:50.780 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84807' 00:15:50.780 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84807 00:15:50.780 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84807 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84557 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84557 ']' 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84557 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84557 00:15:51.065 killing process with pid 84557 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84557' 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84557 00:15:51.065 [2024-07-15 16:01:44.544473] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84557 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.065 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.323 16:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84858 00:15:51.323 16:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:51.323 16:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84858 00:15:51.323 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84858 ']' 00:15:51.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.323 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.323 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.323 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.323 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.323 16:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.323 [2024-07-15 16:01:44.872220] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:15:51.323 [2024-07-15 16:01:44.872369] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.323 [2024-07-15 16:01:45.012138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.580 [2024-07-15 16:01:45.150092] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.580 [2024-07-15 16:01:45.150165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.580 [2024-07-15 16:01:45.150177] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.580 [2024-07-15 16:01:45.150186] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.580 [2024-07-15 16:01:45.150193] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
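The Code=-1 (Operation not permitted) attach failure traced above is the point of the chmod 0666 step: bdev_nvme refuses to load a PSK file that is group/other accessible ("Incorrect permissions for PSK file"), so the attach is expected to fail, and the NOT wrapper turns that expected failure into a passing check. Below is a simplified sketch of that wrapper, not the real autotest_common.sh helper (which also routes the argument through valid_exec_arg).

# Simplified sketch of the NOT wrapper whose trace appears above.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"    # let shell-level failures (signals, not-found) through
    (( es != 0 ))                     # succeed only if the wrapped command failed
}

# Usage from the trace: a world-readable PSK must make run_bdevperf fail.
chmod 0666 /tmp/tmp.JgefRAlb9G
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JgefRAlb9G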
00:15:51.580 [2024-07-15 16:01:45.150225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.JgefRAlb9G 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JgefRAlb9G 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.JgefRAlb9G 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JgefRAlb9G 00:15:52.513 16:01:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:52.772 [2024-07-15 16:01:46.242284] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.772 16:01:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:53.029 16:01:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:53.288 [2024-07-15 16:01:46.782495] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:53.288 [2024-07-15 16:01:46.782843] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.288 16:01:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:53.547 malloc0 00:15:53.547 16:01:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:53.805 16:01:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JgefRAlb9G 00:15:54.063 [2024-07-15 16:01:47.635112] tcp.c:3620:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:54.063 [2024-07-15 16:01:47.635192] tcp.c:3706:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:54.063 [2024-07-15 16:01:47.635241] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:54.063 2024/07/15 16:01:47 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.JgefRAlb9G], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:15:54.063 request: 00:15:54.063 { 00:15:54.063 "method": "nvmf_subsystem_add_host", 00:15:54.063 "params": { 00:15:54.063 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.063 "host": "nqn.2016-06.io.spdk:host1", 00:15:54.063 "psk": "/tmp/tmp.JgefRAlb9G" 00:15:54.063 } 00:15:54.063 } 00:15:54.063 Got JSON-RPC error response 00:15:54.063 GoRPCClient: error on JSON-RPC call 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84858 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84858 ']' 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84858 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84858 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:54.063 killing process with pid 84858 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84858' 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84858 00:15:54.063 16:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84858 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.JgefRAlb9G 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84974 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84974 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84974 ']' 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
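The Code=-32603 Internal error above is the target-side mirror of the same check: nvmf_subsystem_add_host cannot retrieve the PSK while the file is still mode 0666, which is why tls.sh@181 chmods it back to 0600 before the target is restarted for the next case. Each of these restarts is bracketed by the killprocess helper; the sketch below reduces the pattern its trace shows (kill -0 liveness check, ps comm lookup for the log line, then kill and wait) and leaves out the sudo special-casing of the real helper.

# Reduced sketch of the killprocess pattern traced throughout this run.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" || return 1        # nothing to do if it already exited
    echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
    kill "$pid"
    wait "$pid" || true               # reap it (valid here: the app is a child of the test shell)
}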
00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.629 16:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:54.629 [2024-07-15 16:01:48.138881] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:15:54.629 [2024-07-15 16:01:48.139064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.629 [2024-07-15 16:01:48.284257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.893 [2024-07-15 16:01:48.438594] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.893 [2024-07-15 16:01:48.438667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.894 [2024-07-15 16:01:48.438680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.894 [2024-07-15 16:01:48.438689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.894 [2024-07-15 16:01:48.438697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.894 [2024-07-15 16:01:48.438733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.467 16:01:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.467 16:01:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:55.467 16:01:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:55.467 16:01:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:55.467 16:01:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.467 16:01:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.467 16:01:49 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.JgefRAlb9G 00:15:55.467 16:01:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JgefRAlb9G 00:15:55.467 16:01:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:55.725 [2024-07-15 16:01:49.419985] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.725 16:01:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:55.983 16:01:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:56.241 [2024-07-15 16:01:49.940085] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:56.241 [2024-07-15 16:01:49.940332] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.241 16:01:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:56.808 malloc0 00:15:56.808 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:57.066 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JgefRAlb9G 00:15:57.325 [2024-07-15 16:01:50.948020] tcp.c:3710:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:57.325 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=85081 00:15:57.325 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:57.325 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:57.325 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 85081 /var/tmp/bdevperf.sock 00:15:57.325 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85081 ']' 00:15:57.325 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:57.325 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:57.325 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:57.325 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.325 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.325 [2024-07-15 16:01:51.040694] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:15:57.325 [2024-07-15 16:01:51.040854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85081 ] 00:15:57.583 [2024-07-15 16:01:51.181810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.841 [2024-07-15 16:01:51.330713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.775 16:01:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.775 16:01:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:58.775 16:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JgefRAlb9G 00:15:58.775 [2024-07-15 16:01:52.418024] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:58.775 [2024-07-15 16:01:52.418161] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:58.775 TLSTESTn1 00:15:59.034 16:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:59.292 16:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:59.292 "subsystems": [ 00:15:59.292 { 00:15:59.292 "subsystem": "keyring", 00:15:59.292 "config": [] 00:15:59.292 }, 00:15:59.292 { 00:15:59.292 "subsystem": "iobuf", 00:15:59.292 "config": [ 00:15:59.292 { 00:15:59.292 "method": "iobuf_set_options", 00:15:59.292 "params": { 00:15:59.292 "large_bufsize": 
135168, 00:15:59.292 "large_pool_count": 1024, 00:15:59.292 "small_bufsize": 8192, 00:15:59.292 "small_pool_count": 8192 00:15:59.292 } 00:15:59.292 } 00:15:59.292 ] 00:15:59.292 }, 00:15:59.292 { 00:15:59.292 "subsystem": "sock", 00:15:59.292 "config": [ 00:15:59.292 { 00:15:59.292 "method": "sock_set_default_impl", 00:15:59.292 "params": { 00:15:59.292 "impl_name": "posix" 00:15:59.292 } 00:15:59.292 }, 00:15:59.292 { 00:15:59.292 "method": "sock_impl_set_options", 00:15:59.292 "params": { 00:15:59.292 "enable_ktls": false, 00:15:59.292 "enable_placement_id": 0, 00:15:59.292 "enable_quickack": false, 00:15:59.292 "enable_recv_pipe": true, 00:15:59.292 "enable_zerocopy_send_client": false, 00:15:59.292 "enable_zerocopy_send_server": true, 00:15:59.292 "impl_name": "ssl", 00:15:59.292 "recv_buf_size": 4096, 00:15:59.292 "send_buf_size": 4096, 00:15:59.292 "tls_version": 0, 00:15:59.292 "zerocopy_threshold": 0 00:15:59.292 } 00:15:59.292 }, 00:15:59.292 { 00:15:59.292 "method": "sock_impl_set_options", 00:15:59.292 "params": { 00:15:59.292 "enable_ktls": false, 00:15:59.293 "enable_placement_id": 0, 00:15:59.293 "enable_quickack": false, 00:15:59.293 "enable_recv_pipe": true, 00:15:59.293 "enable_zerocopy_send_client": false, 00:15:59.293 "enable_zerocopy_send_server": true, 00:15:59.293 "impl_name": "posix", 00:15:59.293 "recv_buf_size": 2097152, 00:15:59.293 "send_buf_size": 2097152, 00:15:59.293 "tls_version": 0, 00:15:59.293 "zerocopy_threshold": 0 00:15:59.293 } 00:15:59.293 } 00:15:59.293 ] 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "subsystem": "vmd", 00:15:59.293 "config": [] 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "subsystem": "accel", 00:15:59.293 "config": [ 00:15:59.293 { 00:15:59.293 "method": "accel_set_options", 00:15:59.293 "params": { 00:15:59.293 "buf_count": 2048, 00:15:59.293 "large_cache_size": 16, 00:15:59.293 "sequence_count": 2048, 00:15:59.293 "small_cache_size": 128, 00:15:59.293 "task_count": 2048 00:15:59.293 } 00:15:59.293 } 00:15:59.293 ] 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "subsystem": "bdev", 00:15:59.293 "config": [ 00:15:59.293 { 00:15:59.293 "method": "bdev_set_options", 00:15:59.293 "params": { 00:15:59.293 "bdev_auto_examine": true, 00:15:59.293 "bdev_io_cache_size": 256, 00:15:59.293 "bdev_io_pool_size": 65535, 00:15:59.293 "iobuf_large_cache_size": 16, 00:15:59.293 "iobuf_small_cache_size": 128 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_raid_set_options", 00:15:59.293 "params": { 00:15:59.293 "process_window_size_kb": 1024 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_iscsi_set_options", 00:15:59.293 "params": { 00:15:59.293 "timeout_sec": 30 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_nvme_set_options", 00:15:59.293 "params": { 00:15:59.293 "action_on_timeout": "none", 00:15:59.293 "allow_accel_sequence": false, 00:15:59.293 "arbitration_burst": 0, 00:15:59.293 "bdev_retry_count": 3, 00:15:59.293 "ctrlr_loss_timeout_sec": 0, 00:15:59.293 "delay_cmd_submit": true, 00:15:59.293 "dhchap_dhgroups": [ 00:15:59.293 "null", 00:15:59.293 "ffdhe2048", 00:15:59.293 "ffdhe3072", 00:15:59.293 "ffdhe4096", 00:15:59.293 "ffdhe6144", 00:15:59.293 "ffdhe8192" 00:15:59.293 ], 00:15:59.293 "dhchap_digests": [ 00:15:59.293 "sha256", 00:15:59.293 "sha384", 00:15:59.293 "sha512" 00:15:59.293 ], 00:15:59.293 "disable_auto_failback": false, 00:15:59.293 "fast_io_fail_timeout_sec": 0, 00:15:59.293 "generate_uuids": false, 00:15:59.293 "high_priority_weight": 0, 
00:15:59.293 "io_path_stat": false, 00:15:59.293 "io_queue_requests": 0, 00:15:59.293 "keep_alive_timeout_ms": 10000, 00:15:59.293 "low_priority_weight": 0, 00:15:59.293 "medium_priority_weight": 0, 00:15:59.293 "nvme_adminq_poll_period_us": 10000, 00:15:59.293 "nvme_error_stat": false, 00:15:59.293 "nvme_ioq_poll_period_us": 0, 00:15:59.293 "rdma_cm_event_timeout_ms": 0, 00:15:59.293 "rdma_max_cq_size": 0, 00:15:59.293 "rdma_srq_size": 0, 00:15:59.293 "reconnect_delay_sec": 0, 00:15:59.293 "timeout_admin_us": 0, 00:15:59.293 "timeout_us": 0, 00:15:59.293 "transport_ack_timeout": 0, 00:15:59.293 "transport_retry_count": 4, 00:15:59.293 "transport_tos": 0 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_nvme_set_hotplug", 00:15:59.293 "params": { 00:15:59.293 "enable": false, 00:15:59.293 "period_us": 100000 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_malloc_create", 00:15:59.293 "params": { 00:15:59.293 "block_size": 4096, 00:15:59.293 "name": "malloc0", 00:15:59.293 "num_blocks": 8192, 00:15:59.293 "optimal_io_boundary": 0, 00:15:59.293 "physical_block_size": 4096, 00:15:59.293 "uuid": "f6585850-40d9-404a-8f49-76221a492722" 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_wait_for_examine" 00:15:59.293 } 00:15:59.293 ] 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "subsystem": "nbd", 00:15:59.293 "config": [] 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "subsystem": "scheduler", 00:15:59.293 "config": [ 00:15:59.293 { 00:15:59.293 "method": "framework_set_scheduler", 00:15:59.293 "params": { 00:15:59.293 "name": "static" 00:15:59.293 } 00:15:59.293 } 00:15:59.293 ] 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "subsystem": "nvmf", 00:15:59.293 "config": [ 00:15:59.293 { 00:15:59.293 "method": "nvmf_set_config", 00:15:59.293 "params": { 00:15:59.293 "admin_cmd_passthru": { 00:15:59.293 "identify_ctrlr": false 00:15:59.293 }, 00:15:59.293 "discovery_filter": "match_any" 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "nvmf_set_max_subsystems", 00:15:59.293 "params": { 00:15:59.293 "max_subsystems": 1024 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "nvmf_set_crdt", 00:15:59.293 "params": { 00:15:59.293 "crdt1": 0, 00:15:59.293 "crdt2": 0, 00:15:59.293 "crdt3": 0 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "nvmf_create_transport", 00:15:59.293 "params": { 00:15:59.293 "abort_timeout_sec": 1, 00:15:59.293 "ack_timeout": 0, 00:15:59.293 "buf_cache_size": 4294967295, 00:15:59.293 "c2h_success": false, 00:15:59.293 "data_wr_pool_size": 0, 00:15:59.293 "dif_insert_or_strip": false, 00:15:59.293 "in_capsule_data_size": 4096, 00:15:59.293 "io_unit_size": 131072, 00:15:59.293 "max_aq_depth": 128, 00:15:59.293 "max_io_qpairs_per_ctrlr": 127, 00:15:59.293 "max_io_size": 131072, 00:15:59.293 "max_queue_depth": 128, 00:15:59.293 "num_shared_buffers": 511, 00:15:59.293 "sock_priority": 0, 00:15:59.293 "trtype": "TCP", 00:15:59.293 "zcopy": false 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "nvmf_create_subsystem", 00:15:59.293 "params": { 00:15:59.293 "allow_any_host": false, 00:15:59.293 "ana_reporting": false, 00:15:59.293 "max_cntlid": 65519, 00:15:59.293 "max_namespaces": 10, 00:15:59.293 "min_cntlid": 1, 00:15:59.293 "model_number": "SPDK bdev Controller", 00:15:59.293 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.293 "serial_number": "SPDK00000000000001" 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": 
"nvmf_subsystem_add_host", 00:15:59.293 "params": { 00:15:59.293 "host": "nqn.2016-06.io.spdk:host1", 00:15:59.293 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.293 "psk": "/tmp/tmp.JgefRAlb9G" 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "nvmf_subsystem_add_ns", 00:15:59.293 "params": { 00:15:59.293 "namespace": { 00:15:59.293 "bdev_name": "malloc0", 00:15:59.293 "nguid": "F658585040D9404A8F4976221A492722", 00:15:59.293 "no_auto_visible": false, 00:15:59.293 "nsid": 1, 00:15:59.293 "uuid": "f6585850-40d9-404a-8f49-76221a492722" 00:15:59.293 }, 00:15:59.293 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "nvmf_subsystem_add_listener", 00:15:59.293 "params": { 00:15:59.293 "listen_address": { 00:15:59.293 "adrfam": "IPv4", 00:15:59.293 "traddr": "10.0.0.2", 00:15:59.293 "trsvcid": "4420", 00:15:59.293 "trtype": "TCP" 00:15:59.293 }, 00:15:59.293 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.293 "secure_channel": true 00:15:59.293 } 00:15:59.293 } 00:15:59.293 ] 00:15:59.293 } 00:15:59.293 ] 00:15:59.293 }' 00:15:59.293 16:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:59.553 16:01:53 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:59.553 "subsystems": [ 00:15:59.553 { 00:15:59.553 "subsystem": "keyring", 00:15:59.553 "config": [] 00:15:59.553 }, 00:15:59.553 { 00:15:59.553 "subsystem": "iobuf", 00:15:59.553 "config": [ 00:15:59.553 { 00:15:59.553 "method": "iobuf_set_options", 00:15:59.553 "params": { 00:15:59.553 "large_bufsize": 135168, 00:15:59.553 "large_pool_count": 1024, 00:15:59.553 "small_bufsize": 8192, 00:15:59.553 "small_pool_count": 8192 00:15:59.553 } 00:15:59.553 } 00:15:59.553 ] 00:15:59.553 }, 00:15:59.553 { 00:15:59.553 "subsystem": "sock", 00:15:59.553 "config": [ 00:15:59.553 { 00:15:59.553 "method": "sock_set_default_impl", 00:15:59.553 "params": { 00:15:59.553 "impl_name": "posix" 00:15:59.553 } 00:15:59.553 }, 00:15:59.553 { 00:15:59.553 "method": "sock_impl_set_options", 00:15:59.553 "params": { 00:15:59.553 "enable_ktls": false, 00:15:59.553 "enable_placement_id": 0, 00:15:59.553 "enable_quickack": false, 00:15:59.553 "enable_recv_pipe": true, 00:15:59.553 "enable_zerocopy_send_client": false, 00:15:59.553 "enable_zerocopy_send_server": true, 00:15:59.553 "impl_name": "ssl", 00:15:59.553 "recv_buf_size": 4096, 00:15:59.553 "send_buf_size": 4096, 00:15:59.553 "tls_version": 0, 00:15:59.553 "zerocopy_threshold": 0 00:15:59.553 } 00:15:59.553 }, 00:15:59.553 { 00:15:59.553 "method": "sock_impl_set_options", 00:15:59.553 "params": { 00:15:59.553 "enable_ktls": false, 00:15:59.553 "enable_placement_id": 0, 00:15:59.553 "enable_quickack": false, 00:15:59.553 "enable_recv_pipe": true, 00:15:59.553 "enable_zerocopy_send_client": false, 00:15:59.553 "enable_zerocopy_send_server": true, 00:15:59.553 "impl_name": "posix", 00:15:59.553 "recv_buf_size": 2097152, 00:15:59.554 "send_buf_size": 2097152, 00:15:59.554 "tls_version": 0, 00:15:59.554 "zerocopy_threshold": 0 00:15:59.554 } 00:15:59.554 } 00:15:59.554 ] 00:15:59.554 }, 00:15:59.554 { 00:15:59.554 "subsystem": "vmd", 00:15:59.554 "config": [] 00:15:59.554 }, 00:15:59.554 { 00:15:59.554 "subsystem": "accel", 00:15:59.554 "config": [ 00:15:59.554 { 00:15:59.554 "method": "accel_set_options", 00:15:59.554 "params": { 00:15:59.554 "buf_count": 2048, 00:15:59.554 "large_cache_size": 16, 00:15:59.554 "sequence_count": 2048, 00:15:59.554 
"small_cache_size": 128, 00:15:59.554 "task_count": 2048 00:15:59.554 } 00:15:59.554 } 00:15:59.554 ] 00:15:59.554 }, 00:15:59.554 { 00:15:59.554 "subsystem": "bdev", 00:15:59.554 "config": [ 00:15:59.554 { 00:15:59.554 "method": "bdev_set_options", 00:15:59.554 "params": { 00:15:59.554 "bdev_auto_examine": true, 00:15:59.554 "bdev_io_cache_size": 256, 00:15:59.554 "bdev_io_pool_size": 65535, 00:15:59.554 "iobuf_large_cache_size": 16, 00:15:59.554 "iobuf_small_cache_size": 128 00:15:59.554 } 00:15:59.554 }, 00:15:59.554 { 00:15:59.554 "method": "bdev_raid_set_options", 00:15:59.554 "params": { 00:15:59.554 "process_window_size_kb": 1024 00:15:59.554 } 00:15:59.554 }, 00:15:59.554 { 00:15:59.554 "method": "bdev_iscsi_set_options", 00:15:59.554 "params": { 00:15:59.554 "timeout_sec": 30 00:15:59.554 } 00:15:59.554 }, 00:15:59.554 { 00:15:59.554 "method": "bdev_nvme_set_options", 00:15:59.554 "params": { 00:15:59.554 "action_on_timeout": "none", 00:15:59.554 "allow_accel_sequence": false, 00:15:59.554 "arbitration_burst": 0, 00:15:59.554 "bdev_retry_count": 3, 00:15:59.554 "ctrlr_loss_timeout_sec": 0, 00:15:59.554 "delay_cmd_submit": true, 00:15:59.554 "dhchap_dhgroups": [ 00:15:59.554 "null", 00:15:59.554 "ffdhe2048", 00:15:59.554 "ffdhe3072", 00:15:59.554 "ffdhe4096", 00:15:59.554 "ffdhe6144", 00:15:59.554 "ffdhe8192" 00:15:59.554 ], 00:15:59.554 "dhchap_digests": [ 00:15:59.554 "sha256", 00:15:59.554 "sha384", 00:15:59.554 "sha512" 00:15:59.554 ], 00:15:59.554 "disable_auto_failback": false, 00:15:59.554 "fast_io_fail_timeout_sec": 0, 00:15:59.554 "generate_uuids": false, 00:15:59.554 "high_priority_weight": 0, 00:15:59.554 "io_path_stat": false, 00:15:59.554 "io_queue_requests": 512, 00:15:59.554 "keep_alive_timeout_ms": 10000, 00:15:59.554 "low_priority_weight": 0, 00:15:59.554 "medium_priority_weight": 0, 00:15:59.554 "nvme_adminq_poll_period_us": 10000, 00:15:59.554 "nvme_error_stat": false, 00:15:59.554 "nvme_ioq_poll_period_us": 0, 00:15:59.554 "rdma_cm_event_timeout_ms": 0, 00:15:59.554 "rdma_max_cq_size": 0, 00:15:59.554 "rdma_srq_size": 0, 00:15:59.554 "reconnect_delay_sec": 0, 00:15:59.554 "timeout_admin_us": 0, 00:15:59.554 "timeout_us": 0, 00:15:59.554 "transport_ack_timeout": 0, 00:15:59.554 "transport_retry_count": 4, 00:15:59.554 "transport_tos": 0 00:15:59.554 } 00:15:59.554 }, 00:15:59.554 { 00:15:59.554 "method": "bdev_nvme_attach_controller", 00:15:59.554 "params": { 00:15:59.554 "adrfam": "IPv4", 00:15:59.554 "ctrlr_loss_timeout_sec": 0, 00:15:59.554 "ddgst": false, 00:15:59.554 "fast_io_fail_timeout_sec": 0, 00:15:59.554 "hdgst": false, 00:15:59.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:59.554 "name": "TLSTEST", 00:15:59.554 "prchk_guard": false, 00:15:59.554 "prchk_reftag": false, 00:15:59.554 "psk": "/tmp/tmp.JgefRAlb9G", 00:15:59.554 "reconnect_delay_sec": 0, 00:15:59.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.554 "traddr": "10.0.0.2", 00:15:59.554 "trsvcid": "4420", 00:15:59.554 "trtype": "TCP" 00:15:59.554 } 00:15:59.554 }, 00:15:59.554 { 00:15:59.554 "method": "bdev_nvme_set_hotplug", 00:15:59.554 "params": { 00:15:59.554 "enable": false, 00:15:59.554 "period_us": 100000 00:15:59.554 } 00:15:59.554 }, 00:15:59.554 { 00:15:59.554 "method": "bdev_wait_for_examine" 00:15:59.554 } 00:15:59.554 ] 00:15:59.554 }, 00:15:59.554 { 00:15:59.554 "subsystem": "nbd", 00:15:59.554 "config": [] 00:15:59.554 } 00:15:59.554 ] 00:15:59.554 }' 00:15:59.554 16:01:53 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 85081 00:15:59.554 16:01:53 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85081 ']' 00:15:59.554 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85081 00:15:59.554 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:59.554 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.554 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85081 00:15:59.554 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:59.554 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:59.554 killing process with pid 85081 00:15:59.554 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85081' 00:15:59.554 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85081 00:15:59.554 Received shutdown signal, test time was about 10.000000 seconds 00:15:59.554 00:15:59.554 Latency(us) 00:15:59.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.554 =================================================================================================================== 00:15:59.554 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:59.554 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85081 00:15:59.554 [2024-07-15 16:01:53.237378] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:59.812 16:01:53 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84974 00:15:59.812 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84974 ']' 00:15:59.812 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84974 00:15:59.812 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:59.812 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.812 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84974 00:15:59.812 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:59.812 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:59.812 killing process with pid 84974 00:15:59.812 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84974' 00:15:59.812 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84974 00:15:59.812 [2024-07-15 16:01:53.501796] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:59.812 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84974 00:16:00.071 16:01:53 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:00.071 16:01:53 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:16:00.071 "subsystems": [ 00:16:00.071 { 00:16:00.071 "subsystem": "keyring", 00:16:00.071 "config": [] 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "subsystem": "iobuf", 00:16:00.071 "config": [ 00:16:00.071 { 00:16:00.071 "method": "iobuf_set_options", 00:16:00.071 "params": { 00:16:00.071 "large_bufsize": 135168, 00:16:00.071 "large_pool_count": 1024, 00:16:00.071 "small_bufsize": 8192, 00:16:00.071 "small_pool_count": 8192 00:16:00.071 } 00:16:00.071 } 00:16:00.071 ] 00:16:00.071 }, 00:16:00.071 { 
00:16:00.071 "subsystem": "sock", 00:16:00.071 "config": [ 00:16:00.071 { 00:16:00.071 "method": "sock_set_default_impl", 00:16:00.071 "params": { 00:16:00.071 "impl_name": "posix" 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "sock_impl_set_options", 00:16:00.071 "params": { 00:16:00.071 "enable_ktls": false, 00:16:00.071 "enable_placement_id": 0, 00:16:00.071 "enable_quickack": false, 00:16:00.071 "enable_recv_pipe": true, 00:16:00.071 "enable_zerocopy_send_client": false, 00:16:00.071 "enable_zerocopy_send_server": true, 00:16:00.071 "impl_name": "ssl", 00:16:00.071 "recv_buf_size": 4096, 00:16:00.071 "send_buf_size": 4096, 00:16:00.071 "tls_version": 0, 00:16:00.071 "zerocopy_threshold": 0 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "sock_impl_set_options", 00:16:00.071 "params": { 00:16:00.071 "enable_ktls": false, 00:16:00.071 "enable_placement_id": 0, 00:16:00.071 "enable_quickack": false, 00:16:00.071 "enable_recv_pipe": true, 00:16:00.071 "enable_zerocopy_send_client": false, 00:16:00.071 "enable_zerocopy_send_server": true, 00:16:00.071 "impl_name": "posix", 00:16:00.071 "recv_buf_size": 2097152, 00:16:00.071 "send_buf_size": 2097152, 00:16:00.071 "tls_version": 0, 00:16:00.071 "zerocopy_threshold": 0 00:16:00.071 } 00:16:00.071 } 00:16:00.071 ] 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "subsystem": "vmd", 00:16:00.071 "config": [] 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "subsystem": "accel", 00:16:00.071 "config": [ 00:16:00.071 { 00:16:00.071 "method": "accel_set_options", 00:16:00.071 "params": { 00:16:00.071 "buf_count": 2048, 00:16:00.071 "large_cache_size": 16, 00:16:00.071 "sequence_count": 2048, 00:16:00.071 "small_cache_size": 128, 00:16:00.071 "task_count": 2048 00:16:00.071 } 00:16:00.071 } 00:16:00.071 ] 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "subsystem": "bdev", 00:16:00.071 "config": [ 00:16:00.071 { 00:16:00.071 "method": "bdev_set_options", 00:16:00.071 "params": { 00:16:00.071 "bdev_auto_examine": true, 00:16:00.071 "bdev_io_cache_size": 256, 00:16:00.071 "bdev_io_pool_size": 65535, 00:16:00.071 "iobuf_large_cache_size": 16, 00:16:00.071 "iobuf_small_cache_size": 128 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "bdev_raid_set_options", 00:16:00.071 "params": { 00:16:00.071 "process_window_size_kb": 1024 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "bdev_iscsi_set_options", 00:16:00.071 "params": { 00:16:00.071 "timeout_sec": 30 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "bdev_nvme_set_options", 00:16:00.071 "params": { 00:16:00.071 "action_on_timeout": "none", 00:16:00.071 "allow_accel_sequence": false, 00:16:00.071 "arbitration_burst": 0, 00:16:00.071 "bdev_retry_count": 3, 00:16:00.071 "ctrlr_loss_timeout_sec": 0, 00:16:00.071 "delay_cmd_submit": true, 00:16:00.071 "dhchap_dhgroups": [ 00:16:00.071 "null", 00:16:00.071 "ffdhe2048", 00:16:00.071 "ffdhe3072", 00:16:00.071 "ffdhe4096", 00:16:00.071 "ffdhe6144", 00:16:00.071 "ffdhe8192" 00:16:00.071 ], 00:16:00.071 "dhchap_digests": [ 00:16:00.071 "sha256", 00:16:00.071 "sha384", 00:16:00.071 "sha512" 00:16:00.071 ], 00:16:00.071 "disable_auto_failback": false, 00:16:00.071 "fast_io_fail_timeout_sec": 0, 00:16:00.071 "generate_uuids": false, 00:16:00.071 "high_priority_weight": 0, 00:16:00.071 "io_path_stat": false, 00:16:00.071 "io_queue_requests": 0, 00:16:00.071 "keep_alive_timeout_ms": 10000, 00:16:00.071 "low_priority_weight": 0, 00:16:00.071 "medium_priority_weight": 0, 
00:16:00.071 "nvme_adminq_poll_period_us": 10000, 00:16:00.071 "nvme_error_stat": false, 00:16:00.071 "nvme_ioq_poll_period_us": 0, 00:16:00.071 "rdma_cm_event_timeout_ms": 0, 00:16:00.071 "rdma_max_cq_size": 0, 00:16:00.071 "rdma_srq_size": 0, 00:16:00.071 "reconnect_delay_sec": 0, 00:16:00.071 "timeout_admin_us": 0, 00:16:00.071 "timeout_us": 0, 00:16:00.071 "transport_ack_timeout": 0, 00:16:00.071 "transport_retry_count": 4, 00:16:00.071 "transport_tos": 0 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "bdev_nvme_set_hotplug", 00:16:00.071 "params": { 00:16:00.071 "enable": false, 00:16:00.071 "period_us": 100000 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "bdev_malloc_create", 00:16:00.071 "params": { 00:16:00.071 "block_size": 4096, 00:16:00.071 "name": "malloc0", 00:16:00.071 "num_blocks": 8192, 00:16:00.071 "optimal_io_boundary": 0, 00:16:00.071 "physical_block_size": 4096, 00:16:00.071 "uuid": "f6585850-40d9-404a-8f49-76221a492722" 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "bdev_wait_for_examine" 00:16:00.071 } 00:16:00.071 ] 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "subsystem": "nbd", 00:16:00.071 "config": [] 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "subsystem": "scheduler", 00:16:00.071 "config": [ 00:16:00.071 { 00:16:00.071 "method": "framework_set_scheduler", 00:16:00.071 "params": { 00:16:00.071 "name": "static" 00:16:00.071 } 00:16:00.071 } 00:16:00.071 ] 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "subsystem": "nvmf", 00:16:00.071 "config": [ 00:16:00.071 { 00:16:00.071 "method": "nvmf_set_config", 00:16:00.071 "params": { 00:16:00.071 "admin_cmd_passthru": { 00:16:00.071 "identify_ctrlr": false 00:16:00.071 }, 00:16:00.071 "discovery_filter": "match_any" 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "nvmf_set_max_subsystems", 00:16:00.071 "params": { 00:16:00.071 "max_subsystems": 1024 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "nvmf_set_crdt", 00:16:00.071 "params": { 00:16:00.071 "crdt1": 0, 00:16:00.071 "crdt2": 0, 00:16:00.071 "crdt3": 0 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "nvmf_create_transport", 00:16:00.071 "params": { 00:16:00.071 "abort_timeout_sec": 1, 00:16:00.071 "ack_timeout": 0, 00:16:00.071 "buf_cache_size": 4294967295, 00:16:00.071 "c2h_success": false, 00:16:00.071 "data_wr_pool_size": 0, 00:16:00.071 "dif_insert_or_strip": false, 00:16:00.071 "in_capsule_data_size": 4096, 00:16:00.071 "io_unit_size": 131072, 00:16:00.071 "max_aq_depth": 128, 00:16:00.071 "max_io_qpairs_per_ctrlr": 127, 00:16:00.071 "max_io_size": 131072, 00:16:00.071 "max_queue_depth": 128, 00:16:00.071 "num_shared_buffers": 511, 00:16:00.071 "sock_priority": 0, 00:16:00.071 "trtype": "TCP", 00:16:00.071 "zcopy": false 00:16:00.071 } 00:16:00.071 }, 00:16:00.071 { 00:16:00.071 "method": "nvmf_create_subsystem", 00:16:00.071 "params": { 00:16:00.071 "allow_any_host": false, 00:16:00.071 "ana_reporting": false, 00:16:00.071 "max_cntlid": 65519, 00:16:00.071 "max_namespaces": 10, 00:16:00.071 "min_cntlid": 1, 00:16:00.071 "model_number": "SPDK bdev Controller", 00:16:00.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.072 "serial_number": "SPDK00000000000001" 00:16:00.072 } 00:16:00.072 }, 00:16:00.072 { 00:16:00.072 "method": "nvmf_subsystem_add_host", 00:16:00.072 "params": { 00:16:00.072 "host": "nqn.2016-06.io.spdk:host1", 00:16:00.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.072 "psk": "/tmp/tmp.JgefRAlb9G" 00:16:00.072 
} 00:16:00.072 }, 00:16:00.072 { 00:16:00.072 "method": "nvmf_subsystem_add_ns", 00:16:00.072 "params": { 00:16:00.072 "namespace": { 00:16:00.072 "bdev_name": "malloc0", 00:16:00.072 "nguid": "F658585040D9404A8F4976221A492722", 00:16:00.072 "no_auto_visible": false, 00:16:00.072 "nsid": 1, 00:16:00.072 "uuid": "f6585850-40d9-404a-8f49-76221a492722" 00:16:00.072 }, 00:16:00.072 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:00.072 } 00:16:00.072 }, 00:16:00.072 { 00:16:00.072 "method": "nvmf_subsystem_add_listener", 00:16:00.072 "params": { 00:16:00.072 "listen_address": { 00:16:00.072 "adrfam": "IPv4", 00:16:00.072 "traddr": "10.0.0.2", 00:16:00.072 "trsvcid": "4420", 00:16:00.072 "trtype": "TCP" 00:16:00.072 }, 00:16:00.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.072 "secure_channel": true 00:16:00.072 } 00:16:00.072 } 00:16:00.072 ] 00:16:00.072 } 00:16:00.072 ] 00:16:00.072 }' 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85161 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85161 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85161 ']' 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.072 16:01:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:00.330 [2024-07-15 16:01:53.824241] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:00.330 [2024-07-15 16:01:53.824341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.330 [2024-07-15 16:01:53.958903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.588 [2024-07-15 16:01:54.095254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.588 [2024-07-15 16:01:54.095317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.588 [2024-07-15 16:01:54.095329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.588 [2024-07-15 16:01:54.095337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.588 [2024-07-15 16:01:54.095353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:00.588 [2024-07-15 16:01:54.095457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.847 [2024-07-15 16:01:54.326189] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.847 [2024-07-15 16:01:54.342105] tcp.c:3710:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:00.847 [2024-07-15 16:01:54.358093] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:00.847 [2024-07-15 16:01:54.358331] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=85205 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 85205 /var/tmp/bdevperf.sock 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85205 ']' 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
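Both applications above are started from JSON that an earlier save_config RPC produced, handed over on file descriptors (-c /dev/fd/62 for nvmf_tgt, -c /dev/fd/63 for bdevperf) rather than from a file on disk. A minimal sketch of that pattern, assuming bash process substitution, with paths shortened to be repo-relative; bdevperfconf is the variable name seen at target/tls.sh@197, while tgtconf is a placeholder for the target-side variable, which is not visible in this excerpt:

  # Capture the running configuration over the RPC sockets (hypothetical variable name tgtconf)
  tgtconf="$(scripts/rpc.py save_config)"                                  # target, default /var/tmp/spdk.sock
  bdevperfconf="$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)"   # bdevperf instance

  # Replay them; <(echo ...) is what shows up in the trace as /dev/fd/62 and /dev/fd/63
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")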
00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:01.106 16:01:54 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:16:01.106 "subsystems": [ 00:16:01.106 { 00:16:01.106 "subsystem": "keyring", 00:16:01.106 "config": [] 00:16:01.106 }, 00:16:01.106 { 00:16:01.106 "subsystem": "iobuf", 00:16:01.106 "config": [ 00:16:01.106 { 00:16:01.106 "method": "iobuf_set_options", 00:16:01.106 "params": { 00:16:01.106 "large_bufsize": 135168, 00:16:01.106 "large_pool_count": 1024, 00:16:01.106 "small_bufsize": 8192, 00:16:01.106 "small_pool_count": 8192 00:16:01.106 } 00:16:01.106 } 00:16:01.106 ] 00:16:01.106 }, 00:16:01.106 { 00:16:01.106 "subsystem": "sock", 00:16:01.106 "config": [ 00:16:01.106 { 00:16:01.106 "method": "sock_set_default_impl", 00:16:01.106 "params": { 00:16:01.106 "impl_name": "posix" 00:16:01.106 } 00:16:01.106 }, 00:16:01.106 { 00:16:01.106 "method": "sock_impl_set_options", 00:16:01.106 "params": { 00:16:01.106 "enable_ktls": false, 00:16:01.106 "enable_placement_id": 0, 00:16:01.106 "enable_quickack": false, 00:16:01.106 "enable_recv_pipe": true, 00:16:01.106 "enable_zerocopy_send_client": false, 00:16:01.106 "enable_zerocopy_send_server": true, 00:16:01.106 "impl_name": "ssl", 00:16:01.106 "recv_buf_size": 4096, 00:16:01.106 "send_buf_size": 4096, 00:16:01.106 "tls_version": 0, 00:16:01.106 "zerocopy_threshold": 0 00:16:01.106 } 00:16:01.106 }, 00:16:01.106 { 00:16:01.106 "method": "sock_impl_set_options", 00:16:01.106 "params": { 00:16:01.106 "enable_ktls": false, 00:16:01.106 "enable_placement_id": 0, 00:16:01.106 "enable_quickack": false, 00:16:01.106 "enable_recv_pipe": true, 00:16:01.106 "enable_zerocopy_send_client": false, 00:16:01.106 "enable_zerocopy_send_server": true, 00:16:01.106 "impl_name": "posix", 00:16:01.106 "recv_buf_size": 2097152, 00:16:01.106 "send_buf_size": 2097152, 00:16:01.106 "tls_version": 0, 00:16:01.107 "zerocopy_threshold": 0 00:16:01.107 } 00:16:01.107 } 00:16:01.107 ] 00:16:01.107 }, 00:16:01.107 { 00:16:01.107 "subsystem": "vmd", 00:16:01.107 "config": [] 00:16:01.107 }, 00:16:01.107 { 00:16:01.107 "subsystem": "accel", 00:16:01.107 "config": [ 00:16:01.107 { 00:16:01.107 "method": "accel_set_options", 00:16:01.107 "params": { 00:16:01.107 "buf_count": 2048, 00:16:01.107 "large_cache_size": 16, 00:16:01.107 "sequence_count": 2048, 00:16:01.107 "small_cache_size": 128, 00:16:01.107 "task_count": 2048 00:16:01.107 } 00:16:01.107 } 00:16:01.107 ] 00:16:01.107 }, 00:16:01.107 { 00:16:01.107 "subsystem": "bdev", 00:16:01.107 "config": [ 00:16:01.107 { 00:16:01.107 "method": "bdev_set_options", 00:16:01.107 "params": { 00:16:01.107 "bdev_auto_examine": true, 00:16:01.107 "bdev_io_cache_size": 256, 00:16:01.107 "bdev_io_pool_size": 65535, 00:16:01.107 "iobuf_large_cache_size": 16, 00:16:01.107 "iobuf_small_cache_size": 128 00:16:01.107 } 00:16:01.107 }, 00:16:01.107 { 00:16:01.107 "method": "bdev_raid_set_options", 00:16:01.107 "params": { 00:16:01.107 "process_window_size_kb": 1024 00:16:01.107 } 00:16:01.107 }, 00:16:01.107 { 00:16:01.107 "method": "bdev_iscsi_set_options", 00:16:01.107 "params": { 00:16:01.107 "timeout_sec": 30 00:16:01.107 } 00:16:01.107 }, 00:16:01.107 { 00:16:01.107 "method": "bdev_nvme_set_options", 00:16:01.107 "params": { 00:16:01.107 "action_on_timeout": "none", 00:16:01.107 "allow_accel_sequence": false, 00:16:01.107 "arbitration_burst": 0, 00:16:01.107 "bdev_retry_count": 3, 00:16:01.107 
"ctrlr_loss_timeout_sec": 0, 00:16:01.107 "delay_cmd_submit": true, 00:16:01.107 "dhchap_dhgroups": [ 00:16:01.107 "null", 00:16:01.107 "ffdhe2048", 00:16:01.107 "ffdhe3072", 00:16:01.107 "ffdhe4096", 00:16:01.107 "ffdhe6144", 00:16:01.107 "ffdhe8192" 00:16:01.107 ], 00:16:01.107 "dhchap_digests": [ 00:16:01.107 "sha256", 00:16:01.107 "sha384", 00:16:01.107 "sha512" 00:16:01.107 ], 00:16:01.107 "disable_auto_failback": false, 00:16:01.107 "fast_io_fail_timeout_sec": 0, 00:16:01.107 "generate_uuids": false, 00:16:01.107 "high_priority_weight": 0, 00:16:01.107 "io_path_stat": false, 00:16:01.107 "io_queue_requests": 512, 00:16:01.107 "keep_alive_timeout_ms": 10000, 00:16:01.107 "low_priority_weight": 0, 00:16:01.107 "medium_priority_weight": 0, 00:16:01.107 "nvme_adminq_poll_period_us": 10000, 00:16:01.107 "nvme_error_stat": false, 00:16:01.107 "nvme_ioq_poll_period_us": 0, 00:16:01.107 "rdma_cm_event_timeout_ms": 0, 00:16:01.107 "rdma_max_cq_size": 0, 00:16:01.107 "rdma_srq_size": 0, 00:16:01.107 "reconnect_delay_sec": 0, 00:16:01.107 "timeout_admin_us": 0, 00:16:01.107 "timeout_us": 0, 00:16:01.107 "transport_ack_timeout": 0, 00:16:01.107 "transport_retry_count": 4, 00:16:01.107 "transport_tos": 0 00:16:01.107 } 00:16:01.107 }, 00:16:01.107 { 00:16:01.107 "method": "bdev_nvme_attach_controller", 00:16:01.107 "params": { 00:16:01.107 "adrfam": "IPv4", 00:16:01.107 "ctrlr_loss_timeout_sec": 0, 00:16:01.107 "ddgst": false, 00:16:01.107 "fast_io_fail_timeout_sec": 0, 00:16:01.107 "hdgst": false, 00:16:01.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:01.107 "name": "TLSTEST", 00:16:01.107 "prchk_guard": false, 00:16:01.107 "prchk_reftag": false, 00:16:01.107 "psk": "/tmp/tmp.JgefRAlb9G", 00:16:01.107 "reconnect_delay_sec": 0, 00:16:01.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.107 "traddr": "10.0.0.2", 00:16:01.107 "trsvcid": "4420", 00:16:01.107 "trtype": "TCP" 00:16:01.107 } 00:16:01.107 }, 00:16:01.107 { 00:16:01.107 "method": "bdev_nvme_set_hotplug", 00:16:01.107 "params": { 00:16:01.107 "enable": false, 00:16:01.107 "period_us": 100000 00:16:01.107 } 00:16:01.107 }, 00:16:01.107 { 00:16:01.107 "method": "bdev_wait_for_examine" 00:16:01.107 } 00:16:01.107 ] 00:16:01.107 }, 00:16:01.107 { 00:16:01.107 "subsystem": "nbd", 00:16:01.107 "config": [] 00:16:01.107 } 00:16:01.107 ] 00:16:01.107 }' 00:16:01.365 [2024-07-15 16:01:54.871374] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:16:01.366 [2024-07-15 16:01:54.871516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85205 ] 00:16:01.366 [2024-07-15 16:01:55.016135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.624 [2024-07-15 16:01:55.140082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.624 [2024-07-15 16:01:55.307885] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:01.624 [2024-07-15 16:01:55.308014] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:02.190 16:01:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.190 16:01:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:02.190 16:01:55 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:02.448 Running I/O for 10 seconds... 00:16:12.444 00:16:12.444 Latency(us) 00:16:12.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.444 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:12.444 Verification LBA range: start 0x0 length 0x2000 00:16:12.444 TLSTESTn1 : 10.02 3911.54 15.28 0.00 0.00 32656.34 7328.12 32648.84 00:16:12.444 =================================================================================================================== 00:16:12.444 Total : 3911.54 15.28 0.00 0.00 32656.34 7328.12 32648.84 00:16:12.444 0 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 85205 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85205 ']' 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85205 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85205 00:16:12.444 killing process with pid 85205 00:16:12.444 Received shutdown signal, test time was about 10.000000 seconds 00:16:12.444 00:16:12.444 Latency(us) 00:16:12.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.444 =================================================================================================================== 00:16:12.444 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85205' 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85205 00:16:12.444 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85205 00:16:12.444 [2024-07-15 16:02:06.048689] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:12.725 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 85161 00:16:12.725 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85161 ']' 00:16:12.725 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85161 00:16:12.725 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:12.725 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:12.725 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85161 00:16:12.725 killing process with pid 85161 00:16:12.725 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:12.725 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:12.725 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85161' 00:16:12.725 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85161 00:16:12.725 [2024-07-15 16:02:06.427795] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:12.725 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85161 00:16:13.004 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85356 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85356 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85356 ']' 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.005 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.263 [2024-07-15 16:02:06.740728] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:13.263 [2024-07-15 16:02:06.740854] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.263 [2024-07-15 16:02:06.882411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.520 [2024-07-15 16:02:07.010211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:13.520 [2024-07-15 16:02:07.010263] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.520 [2024-07-15 16:02:07.010276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.520 [2024-07-15 16:02:07.010285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.520 [2024-07-15 16:02:07.010292] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.520 [2024-07-15 16:02:07.010323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.086 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.086 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:14.086 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.086 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.086 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:14.344 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.344 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.JgefRAlb9G 00:16:14.344 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JgefRAlb9G 00:16:14.344 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:14.601 [2024-07-15 16:02:08.098485] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.601 16:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:14.857 16:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:14.857 [2024-07-15 16:02:08.578610] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:14.857 [2024-07-15 16:02:08.578844] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.115 16:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:15.115 malloc0 00:16:15.374 16:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:15.374 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JgefRAlb9G 00:16:15.631 [2024-07-15 16:02:09.310471] tcp.c:3710:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:15.631 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85459 00:16:15.631 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:15.631 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:15.631 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85459 /var/tmp/bdevperf.sock 00:16:15.631 
16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85459 ']' 00:16:15.631 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:15.631 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:15.631 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:15.631 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.631 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.889 [2024-07-15 16:02:09.382703] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:15.889 [2024-07-15 16:02:09.382798] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85459 ] 00:16:15.889 [2024-07-15 16:02:09.522855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.146 [2024-07-15 16:02:09.653567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.711 16:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.711 16:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:16.712 16:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JgefRAlb9G 00:16:17.013 16:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:17.271 [2024-07-15 16:02:10.790357] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:17.271 nvme0n1 00:16:17.271 16:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:17.271 Running I/O for 1 seconds... 
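The RPC sequence traced at target/tls.sh@51-58 and @227-232 is the core of this phase: the target exposes a malloc-backed namespace behind a TLS listener tied to the PSK file, and the bdevperf initiator loads the same PSK into its keyring and attaches over a secure channel. Condensed from the commands in the trace, with addresses, NQNs and key paths exactly as logged (a recap of the steps above, not additional test steps; paths shortened to be repo-relative):

  # Target side (default RPC socket)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure (TLS) listener, per the notices above
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JgefRAlb9G

  # Initiator side (bdevperf RPC socket)
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JgefRAlb9G
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests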
00:16:18.647 00:16:18.647 Latency(us) 00:16:18.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.647 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:18.647 Verification LBA range: start 0x0 length 0x2000 00:16:18.647 nvme0n1 : 1.02 3902.75 15.25 0.00 0.00 32441.38 7179.17 23235.49 00:16:18.647 =================================================================================================================== 00:16:18.647 Total : 3902.75 15.25 0.00 0.00 32441.38 7179.17 23235.49 00:16:18.647 0 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85459 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85459 ']' 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85459 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85459 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:18.647 killing process with pid 85459 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85459' 00:16:18.647 Received shutdown signal, test time was about 1.000000 seconds 00:16:18.647 00:16:18.647 Latency(us) 00:16:18.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.647 =================================================================================================================== 00:16:18.647 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85459 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85459 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 85356 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85356 ']' 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85356 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85356 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85356' 00:16:18.647 killing process with pid 85356 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85356 00:16:18.647 [2024-07-15 16:02:12.292401] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:18.647 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85356 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85534 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85534 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85534 ']' 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.907 16:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.907 [2024-07-15 16:02:12.594981] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:18.907 [2024-07-15 16:02:12.595120] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.165 [2024-07-15 16:02:12.731205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.165 [2024-07-15 16:02:12.847347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.165 [2024-07-15 16:02:12.847448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.165 [2024-07-15 16:02:12.847475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.165 [2024-07-15 16:02:12.847498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.165 [2024-07-15 16:02:12.847505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:19.165 [2024-07-15 16:02:12.847546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:20.119 [2024-07-15 16:02:13.598544] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.119 malloc0 00:16:20.119 [2024-07-15 16:02:13.630681] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:20.119 [2024-07-15 16:02:13.630891] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=85584 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 85584 /var/tmp/bdevperf.sock 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85584 ']' 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:20.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.119 16:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:20.119 [2024-07-15 16:02:13.717944] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:16:20.119 [2024-07-15 16:02:13.718045] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85584 ] 00:16:20.378 [2024-07-15 16:02:13.857085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.378 [2024-07-15 16:02:13.986366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.313 16:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.313 16:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:21.313 16:02:14 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JgefRAlb9G 00:16:21.570 16:02:15 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:21.570 [2024-07-15 16:02:15.264731] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:21.826 nvme0n1 00:16:21.826 16:02:15 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:21.826 Running I/O for 1 seconds... 00:16:22.758 00:16:22.758 Latency(us) 00:16:22.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.759 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:22.759 Verification LBA range: start 0x0 length 0x2000 00:16:22.759 nvme0n1 : 1.03 3833.08 14.97 0.00 0.00 32957.49 9592.09 22043.93 00:16:22.759 =================================================================================================================== 00:16:22.759 Total : 3833.08 14.97 0.00 0.00 32957.49 9592.09 22043.93 00:16:22.759 0 00:16:23.017 16:02:16 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:16:23.017 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.017 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.017 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.017 16:02:16 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:16:23.017 "subsystems": [ 00:16:23.017 { 00:16:23.017 "subsystem": "keyring", 00:16:23.017 "config": [ 00:16:23.017 { 00:16:23.017 "method": "keyring_file_add_key", 00:16:23.017 "params": { 00:16:23.017 "name": "key0", 00:16:23.017 "path": "/tmp/tmp.JgefRAlb9G" 00:16:23.017 } 00:16:23.017 } 00:16:23.017 ] 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "subsystem": "iobuf", 00:16:23.017 "config": [ 00:16:23.017 { 00:16:23.017 "method": "iobuf_set_options", 00:16:23.017 "params": { 00:16:23.017 "large_bufsize": 135168, 00:16:23.017 "large_pool_count": 1024, 00:16:23.017 "small_bufsize": 8192, 00:16:23.017 "small_pool_count": 8192 00:16:23.017 } 00:16:23.017 } 00:16:23.017 ] 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "subsystem": "sock", 00:16:23.017 "config": [ 00:16:23.017 { 00:16:23.017 "method": "sock_set_default_impl", 00:16:23.017 "params": { 00:16:23.017 "impl_name": "posix" 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "sock_impl_set_options", 00:16:23.017 "params": { 00:16:23.017 
"enable_ktls": false, 00:16:23.017 "enable_placement_id": 0, 00:16:23.017 "enable_quickack": false, 00:16:23.017 "enable_recv_pipe": true, 00:16:23.017 "enable_zerocopy_send_client": false, 00:16:23.017 "enable_zerocopy_send_server": true, 00:16:23.017 "impl_name": "ssl", 00:16:23.017 "recv_buf_size": 4096, 00:16:23.017 "send_buf_size": 4096, 00:16:23.017 "tls_version": 0, 00:16:23.017 "zerocopy_threshold": 0 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "sock_impl_set_options", 00:16:23.017 "params": { 00:16:23.017 "enable_ktls": false, 00:16:23.017 "enable_placement_id": 0, 00:16:23.017 "enable_quickack": false, 00:16:23.017 "enable_recv_pipe": true, 00:16:23.017 "enable_zerocopy_send_client": false, 00:16:23.017 "enable_zerocopy_send_server": true, 00:16:23.017 "impl_name": "posix", 00:16:23.017 "recv_buf_size": 2097152, 00:16:23.017 "send_buf_size": 2097152, 00:16:23.017 "tls_version": 0, 00:16:23.017 "zerocopy_threshold": 0 00:16:23.017 } 00:16:23.017 } 00:16:23.017 ] 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "subsystem": "vmd", 00:16:23.017 "config": [] 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "subsystem": "accel", 00:16:23.017 "config": [ 00:16:23.017 { 00:16:23.017 "method": "accel_set_options", 00:16:23.017 "params": { 00:16:23.017 "buf_count": 2048, 00:16:23.017 "large_cache_size": 16, 00:16:23.017 "sequence_count": 2048, 00:16:23.017 "small_cache_size": 128, 00:16:23.017 "task_count": 2048 00:16:23.017 } 00:16:23.017 } 00:16:23.017 ] 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "subsystem": "bdev", 00:16:23.017 "config": [ 00:16:23.017 { 00:16:23.017 "method": "bdev_set_options", 00:16:23.017 "params": { 00:16:23.017 "bdev_auto_examine": true, 00:16:23.017 "bdev_io_cache_size": 256, 00:16:23.017 "bdev_io_pool_size": 65535, 00:16:23.017 "iobuf_large_cache_size": 16, 00:16:23.017 "iobuf_small_cache_size": 128 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "bdev_raid_set_options", 00:16:23.017 "params": { 00:16:23.017 "process_window_size_kb": 1024 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "bdev_iscsi_set_options", 00:16:23.017 "params": { 00:16:23.017 "timeout_sec": 30 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "bdev_nvme_set_options", 00:16:23.017 "params": { 00:16:23.017 "action_on_timeout": "none", 00:16:23.017 "allow_accel_sequence": false, 00:16:23.017 "arbitration_burst": 0, 00:16:23.017 "bdev_retry_count": 3, 00:16:23.017 "ctrlr_loss_timeout_sec": 0, 00:16:23.017 "delay_cmd_submit": true, 00:16:23.017 "dhchap_dhgroups": [ 00:16:23.017 "null", 00:16:23.017 "ffdhe2048", 00:16:23.017 "ffdhe3072", 00:16:23.017 "ffdhe4096", 00:16:23.017 "ffdhe6144", 00:16:23.017 "ffdhe8192" 00:16:23.017 ], 00:16:23.017 "dhchap_digests": [ 00:16:23.017 "sha256", 00:16:23.017 "sha384", 00:16:23.017 "sha512" 00:16:23.017 ], 00:16:23.017 "disable_auto_failback": false, 00:16:23.017 "fast_io_fail_timeout_sec": 0, 00:16:23.017 "generate_uuids": false, 00:16:23.017 "high_priority_weight": 0, 00:16:23.017 "io_path_stat": false, 00:16:23.017 "io_queue_requests": 0, 00:16:23.017 "keep_alive_timeout_ms": 10000, 00:16:23.017 "low_priority_weight": 0, 00:16:23.017 "medium_priority_weight": 0, 00:16:23.017 "nvme_adminq_poll_period_us": 10000, 00:16:23.017 "nvme_error_stat": false, 00:16:23.017 "nvme_ioq_poll_period_us": 0, 00:16:23.017 "rdma_cm_event_timeout_ms": 0, 00:16:23.017 "rdma_max_cq_size": 0, 00:16:23.017 "rdma_srq_size": 0, 00:16:23.017 "reconnect_delay_sec": 0, 00:16:23.017 "timeout_admin_us": 0, 
00:16:23.017 "timeout_us": 0, 00:16:23.017 "transport_ack_timeout": 0, 00:16:23.017 "transport_retry_count": 4, 00:16:23.017 "transport_tos": 0 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "bdev_nvme_set_hotplug", 00:16:23.017 "params": { 00:16:23.017 "enable": false, 00:16:23.017 "period_us": 100000 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "bdev_malloc_create", 00:16:23.017 "params": { 00:16:23.017 "block_size": 4096, 00:16:23.017 "name": "malloc0", 00:16:23.017 "num_blocks": 8192, 00:16:23.017 "optimal_io_boundary": 0, 00:16:23.017 "physical_block_size": 4096, 00:16:23.017 "uuid": "8d0c6577-6295-45a7-9f74-29d7e967c2d6" 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "bdev_wait_for_examine" 00:16:23.017 } 00:16:23.017 ] 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "subsystem": "nbd", 00:16:23.017 "config": [] 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "subsystem": "scheduler", 00:16:23.017 "config": [ 00:16:23.017 { 00:16:23.017 "method": "framework_set_scheduler", 00:16:23.017 "params": { 00:16:23.017 "name": "static" 00:16:23.017 } 00:16:23.017 } 00:16:23.017 ] 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "subsystem": "nvmf", 00:16:23.017 "config": [ 00:16:23.017 { 00:16:23.017 "method": "nvmf_set_config", 00:16:23.017 "params": { 00:16:23.017 "admin_cmd_passthru": { 00:16:23.017 "identify_ctrlr": false 00:16:23.017 }, 00:16:23.017 "discovery_filter": "match_any" 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "nvmf_set_max_subsystems", 00:16:23.017 "params": { 00:16:23.017 "max_subsystems": 1024 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "nvmf_set_crdt", 00:16:23.017 "params": { 00:16:23.017 "crdt1": 0, 00:16:23.017 "crdt2": 0, 00:16:23.017 "crdt3": 0 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "nvmf_create_transport", 00:16:23.017 "params": { 00:16:23.017 "abort_timeout_sec": 1, 00:16:23.017 "ack_timeout": 0, 00:16:23.017 "buf_cache_size": 4294967295, 00:16:23.017 "c2h_success": false, 00:16:23.017 "data_wr_pool_size": 0, 00:16:23.017 "dif_insert_or_strip": false, 00:16:23.017 "in_capsule_data_size": 4096, 00:16:23.017 "io_unit_size": 131072, 00:16:23.017 "max_aq_depth": 128, 00:16:23.017 "max_io_qpairs_per_ctrlr": 127, 00:16:23.017 "max_io_size": 131072, 00:16:23.017 "max_queue_depth": 128, 00:16:23.017 "num_shared_buffers": 511, 00:16:23.017 "sock_priority": 0, 00:16:23.017 "trtype": "TCP", 00:16:23.017 "zcopy": false 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "nvmf_create_subsystem", 00:16:23.017 "params": { 00:16:23.017 "allow_any_host": false, 00:16:23.017 "ana_reporting": false, 00:16:23.017 "max_cntlid": 65519, 00:16:23.017 "max_namespaces": 32, 00:16:23.017 "min_cntlid": 1, 00:16:23.017 "model_number": "SPDK bdev Controller", 00:16:23.017 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.017 "serial_number": "00000000000000000000" 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "nvmf_subsystem_add_host", 00:16:23.017 "params": { 00:16:23.017 "host": "nqn.2016-06.io.spdk:host1", 00:16:23.017 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.017 "psk": "key0" 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "nvmf_subsystem_add_ns", 00:16:23.017 "params": { 00:16:23.017 "namespace": { 00:16:23.017 "bdev_name": "malloc0", 00:16:23.017 "nguid": "8D0C6577629545A79F7429D7E967C2D6", 00:16:23.017 "no_auto_visible": false, 00:16:23.017 "nsid": 1, 00:16:23.017 "uuid": 
"8d0c6577-6295-45a7-9f74-29d7e967c2d6" 00:16:23.017 }, 00:16:23.017 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:23.017 } 00:16:23.017 }, 00:16:23.017 { 00:16:23.017 "method": "nvmf_subsystem_add_listener", 00:16:23.017 "params": { 00:16:23.017 "listen_address": { 00:16:23.017 "adrfam": "IPv4", 00:16:23.017 "traddr": "10.0.0.2", 00:16:23.017 "trsvcid": "4420", 00:16:23.017 "trtype": "TCP" 00:16:23.017 }, 00:16:23.017 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.017 "secure_channel": true 00:16:23.017 } 00:16:23.017 } 00:16:23.017 ] 00:16:23.017 } 00:16:23.017 ] 00:16:23.017 }' 00:16:23.017 16:02:16 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:23.275 16:02:16 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:16:23.275 "subsystems": [ 00:16:23.275 { 00:16:23.275 "subsystem": "keyring", 00:16:23.275 "config": [ 00:16:23.275 { 00:16:23.275 "method": "keyring_file_add_key", 00:16:23.275 "params": { 00:16:23.275 "name": "key0", 00:16:23.275 "path": "/tmp/tmp.JgefRAlb9G" 00:16:23.275 } 00:16:23.275 } 00:16:23.275 ] 00:16:23.275 }, 00:16:23.275 { 00:16:23.275 "subsystem": "iobuf", 00:16:23.275 "config": [ 00:16:23.275 { 00:16:23.275 "method": "iobuf_set_options", 00:16:23.275 "params": { 00:16:23.275 "large_bufsize": 135168, 00:16:23.275 "large_pool_count": 1024, 00:16:23.275 "small_bufsize": 8192, 00:16:23.275 "small_pool_count": 8192 00:16:23.275 } 00:16:23.275 } 00:16:23.275 ] 00:16:23.275 }, 00:16:23.275 { 00:16:23.275 "subsystem": "sock", 00:16:23.275 "config": [ 00:16:23.275 { 00:16:23.275 "method": "sock_set_default_impl", 00:16:23.275 "params": { 00:16:23.275 "impl_name": "posix" 00:16:23.275 } 00:16:23.275 }, 00:16:23.275 { 00:16:23.275 "method": "sock_impl_set_options", 00:16:23.275 "params": { 00:16:23.275 "enable_ktls": false, 00:16:23.275 "enable_placement_id": 0, 00:16:23.275 "enable_quickack": false, 00:16:23.275 "enable_recv_pipe": true, 00:16:23.275 "enable_zerocopy_send_client": false, 00:16:23.275 "enable_zerocopy_send_server": true, 00:16:23.275 "impl_name": "ssl", 00:16:23.275 "recv_buf_size": 4096, 00:16:23.275 "send_buf_size": 4096, 00:16:23.275 "tls_version": 0, 00:16:23.275 "zerocopy_threshold": 0 00:16:23.275 } 00:16:23.275 }, 00:16:23.275 { 00:16:23.275 "method": "sock_impl_set_options", 00:16:23.275 "params": { 00:16:23.275 "enable_ktls": false, 00:16:23.275 "enable_placement_id": 0, 00:16:23.275 "enable_quickack": false, 00:16:23.275 "enable_recv_pipe": true, 00:16:23.275 "enable_zerocopy_send_client": false, 00:16:23.275 "enable_zerocopy_send_server": true, 00:16:23.275 "impl_name": "posix", 00:16:23.275 "recv_buf_size": 2097152, 00:16:23.275 "send_buf_size": 2097152, 00:16:23.275 "tls_version": 0, 00:16:23.275 "zerocopy_threshold": 0 00:16:23.275 } 00:16:23.275 } 00:16:23.275 ] 00:16:23.275 }, 00:16:23.275 { 00:16:23.275 "subsystem": "vmd", 00:16:23.275 "config": [] 00:16:23.275 }, 00:16:23.275 { 00:16:23.275 "subsystem": "accel", 00:16:23.275 "config": [ 00:16:23.275 { 00:16:23.275 "method": "accel_set_options", 00:16:23.275 "params": { 00:16:23.275 "buf_count": 2048, 00:16:23.275 "large_cache_size": 16, 00:16:23.275 "sequence_count": 2048, 00:16:23.275 "small_cache_size": 128, 00:16:23.275 "task_count": 2048 00:16:23.275 } 00:16:23.275 } 00:16:23.275 ] 00:16:23.275 }, 00:16:23.275 { 00:16:23.275 "subsystem": "bdev", 00:16:23.275 "config": [ 00:16:23.275 { 00:16:23.275 "method": "bdev_set_options", 00:16:23.275 "params": { 00:16:23.275 "bdev_auto_examine": true, 
00:16:23.275 "bdev_io_cache_size": 256, 00:16:23.275 "bdev_io_pool_size": 65535, 00:16:23.275 "iobuf_large_cache_size": 16, 00:16:23.275 "iobuf_small_cache_size": 128 00:16:23.275 } 00:16:23.275 }, 00:16:23.275 { 00:16:23.275 "method": "bdev_raid_set_options", 00:16:23.275 "params": { 00:16:23.275 "process_window_size_kb": 1024 00:16:23.275 } 00:16:23.275 }, 00:16:23.275 { 00:16:23.275 "method": "bdev_iscsi_set_options", 00:16:23.275 "params": { 00:16:23.275 "timeout_sec": 30 00:16:23.275 } 00:16:23.276 }, 00:16:23.276 { 00:16:23.276 "method": "bdev_nvme_set_options", 00:16:23.276 "params": { 00:16:23.276 "action_on_timeout": "none", 00:16:23.276 "allow_accel_sequence": false, 00:16:23.276 "arbitration_burst": 0, 00:16:23.276 "bdev_retry_count": 3, 00:16:23.276 "ctrlr_loss_timeout_sec": 0, 00:16:23.276 "delay_cmd_submit": true, 00:16:23.276 "dhchap_dhgroups": [ 00:16:23.276 "null", 00:16:23.276 "ffdhe2048", 00:16:23.276 "ffdhe3072", 00:16:23.276 "ffdhe4096", 00:16:23.276 "ffdhe6144", 00:16:23.276 "ffdhe8192" 00:16:23.276 ], 00:16:23.276 "dhchap_digests": [ 00:16:23.276 "sha256", 00:16:23.276 "sha384", 00:16:23.276 "sha512" 00:16:23.276 ], 00:16:23.276 "disable_auto_failback": false, 00:16:23.276 "fast_io_fail_timeout_sec": 0, 00:16:23.276 "generate_uuids": false, 00:16:23.276 "high_priority_weight": 0, 00:16:23.276 "io_path_stat": false, 00:16:23.276 "io_queue_requests": 512, 00:16:23.276 "keep_alive_timeout_ms": 10000, 00:16:23.276 "low_priority_weight": 0, 00:16:23.276 "medium_priority_weight": 0, 00:16:23.276 "nvme_adminq_poll_period_us": 10000, 00:16:23.276 "nvme_error_stat": false, 00:16:23.276 "nvme_ioq_poll_period_us": 0, 00:16:23.276 "rdma_cm_event_timeout_ms": 0, 00:16:23.276 "rdma_max_cq_size": 0, 00:16:23.276 "rdma_srq_size": 0, 00:16:23.276 "reconnect_delay_sec": 0, 00:16:23.276 "timeout_admin_us": 0, 00:16:23.276 "timeout_us": 0, 00:16:23.276 "transport_ack_timeout": 0, 00:16:23.276 "transport_retry_count": 4, 00:16:23.276 "transport_tos": 0 00:16:23.276 } 00:16:23.276 }, 00:16:23.276 { 00:16:23.276 "method": "bdev_nvme_attach_controller", 00:16:23.276 "params": { 00:16:23.276 "adrfam": "IPv4", 00:16:23.276 "ctrlr_loss_timeout_sec": 0, 00:16:23.276 "ddgst": false, 00:16:23.276 "fast_io_fail_timeout_sec": 0, 00:16:23.276 "hdgst": false, 00:16:23.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:23.276 "name": "nvme0", 00:16:23.276 "prchk_guard": false, 00:16:23.276 "prchk_reftag": false, 00:16:23.276 "psk": "key0", 00:16:23.276 "reconnect_delay_sec": 0, 00:16:23.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.276 "traddr": "10.0.0.2", 00:16:23.276 "trsvcid": "4420", 00:16:23.276 "trtype": "TCP" 00:16:23.276 } 00:16:23.276 }, 00:16:23.276 { 00:16:23.276 "method": "bdev_nvme_set_hotplug", 00:16:23.276 "params": { 00:16:23.276 "enable": false, 00:16:23.276 "period_us": 100000 00:16:23.276 } 00:16:23.276 }, 00:16:23.276 { 00:16:23.276 "method": "bdev_enable_histogram", 00:16:23.276 "params": { 00:16:23.276 "enable": true, 00:16:23.276 "name": "nvme0n1" 00:16:23.276 } 00:16:23.276 }, 00:16:23.276 { 00:16:23.276 "method": "bdev_wait_for_examine" 00:16:23.276 } 00:16:23.276 ] 00:16:23.276 }, 00:16:23.276 { 00:16:23.276 "subsystem": "nbd", 00:16:23.276 "config": [] 00:16:23.276 } 00:16:23.276 ] 00:16:23.276 }' 00:16:23.276 16:02:16 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 85584 00:16:23.276 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85584 ']' 00:16:23.276 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85584 
00:16:23.276 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:23.276 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.276 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85584 00:16:23.276 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:23.276 killing process with pid 85584 00:16:23.276 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:23.276 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85584' 00:16:23.276 Received shutdown signal, test time was about 1.000000 seconds 00:16:23.276 00:16:23.276 Latency(us) 00:16:23.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.276 =================================================================================================================== 00:16:23.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:23.276 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85584 00:16:23.276 16:02:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85584 00:16:23.533 16:02:17 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 85534 00:16:23.533 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85534 ']' 00:16:23.533 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85534 00:16:23.533 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:23.533 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.533 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85534 00:16:23.533 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:23.533 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:23.533 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85534' 00:16:23.533 killing process with pid 85534 00:16:23.533 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85534 00:16:23.533 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85534 00:16:23.791 16:02:17 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:16:23.791 16:02:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.791 16:02:17 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:16:23.791 "subsystems": [ 00:16:23.791 { 00:16:23.791 "subsystem": "keyring", 00:16:23.791 "config": [ 00:16:23.791 { 00:16:23.791 "method": "keyring_file_add_key", 00:16:23.791 "params": { 00:16:23.791 "name": "key0", 00:16:23.791 "path": "/tmp/tmp.JgefRAlb9G" 00:16:23.791 } 00:16:23.791 } 00:16:23.791 ] 00:16:23.791 }, 00:16:23.791 { 00:16:23.791 "subsystem": "iobuf", 00:16:23.791 "config": [ 00:16:23.791 { 00:16:23.791 "method": "iobuf_set_options", 00:16:23.791 "params": { 00:16:23.791 "large_bufsize": 135168, 00:16:23.791 "large_pool_count": 1024, 00:16:23.791 "small_bufsize": 8192, 00:16:23.791 "small_pool_count": 8192 00:16:23.791 } 00:16:23.791 } 00:16:23.791 ] 00:16:23.791 }, 00:16:23.791 { 00:16:23.791 "subsystem": "sock", 00:16:23.791 "config": [ 00:16:23.791 { 00:16:23.791 "method": "sock_set_default_impl", 00:16:23.791 "params": { 00:16:23.791 "impl_name": "posix" 00:16:23.791 } 00:16:23.791 }, 00:16:23.791 { 00:16:23.791 "method": 
"sock_impl_set_options", 00:16:23.791 "params": { 00:16:23.791 "enable_ktls": false, 00:16:23.791 "enable_placement_id": 0, 00:16:23.791 "enable_quickack": false, 00:16:23.791 "enable_recv_pipe": true, 00:16:23.791 "enable_zerocopy_send_client": false, 00:16:23.791 "enable_zerocopy_send_server": true, 00:16:23.791 "impl_name": "ssl", 00:16:23.791 "recv_buf_size": 4096, 00:16:23.791 "send_buf_size": 4096, 00:16:23.791 "tls_version": 0, 00:16:23.791 "zerocopy_threshold": 0 00:16:23.791 } 00:16:23.791 }, 00:16:23.791 { 00:16:23.791 "method": "sock_impl_set_options", 00:16:23.791 "params": { 00:16:23.791 "enable_ktls": false, 00:16:23.791 "enable_placement_id": 0, 00:16:23.791 "enable_quickack": false, 00:16:23.791 "enable_recv_pipe": true, 00:16:23.791 "enable_zerocopy_send_client": false, 00:16:23.791 "enable_zerocopy_send_server": true, 00:16:23.791 "impl_name": "posix", 00:16:23.791 "recv_buf_size": 2097152, 00:16:23.791 "send_buf_size": 2097152, 00:16:23.791 "tls_version": 0, 00:16:23.791 "zerocopy_threshold": 0 00:16:23.791 } 00:16:23.791 } 00:16:23.791 ] 00:16:23.791 }, 00:16:23.791 { 00:16:23.791 "subsystem": "vmd", 00:16:23.791 "config": [] 00:16:23.791 }, 00:16:23.791 { 00:16:23.791 "subsystem": "accel", 00:16:23.791 "config": [ 00:16:23.791 { 00:16:23.791 "method": "accel_set_options", 00:16:23.791 "params": { 00:16:23.791 "buf_count": 2048, 00:16:23.791 "large_cache_size": 16, 00:16:23.791 "sequence_count": 2048, 00:16:23.791 "small_cache_size": 128, 00:16:23.791 "task_count": 2048 00:16:23.791 } 00:16:23.791 } 00:16:23.791 ] 00:16:23.791 }, 00:16:23.791 { 00:16:23.791 "subsystem": "bdev", 00:16:23.791 "config": [ 00:16:23.791 { 00:16:23.791 "method": "bdev_set_options", 00:16:23.791 "params": { 00:16:23.791 "bdev_auto_examine": true, 00:16:23.791 "bdev_io_cache_size": 256, 00:16:23.791 "bdev_io_pool_size": 65535, 00:16:23.791 "iobuf_large_cache_size": 16, 00:16:23.791 "iobuf_small_cache_size": 128 00:16:23.791 } 00:16:23.791 }, 00:16:23.791 { 00:16:23.791 "method": "bdev_raid_set_options", 00:16:23.791 "params": { 00:16:23.791 "process_window_size_kb": 1024 00:16:23.791 } 00:16:23.791 }, 00:16:23.791 { 00:16:23.791 "method": "bdev_iscsi_set_options", 00:16:23.791 "params": { 00:16:23.791 "timeout_sec": 30 00:16:23.791 } 00:16:23.791 }, 00:16:23.791 { 00:16:23.791 "method": "bdev_nvme_set_options", 00:16:23.791 "params": { 00:16:23.791 "action_on_timeout": "none", 00:16:23.791 "allow_accel_sequence": false, 00:16:23.791 "arbitration_burst": 0, 00:16:23.791 "bdev_retry_count": 3, 00:16:23.791 "ctrlr_loss_timeout_sec": 0, 00:16:23.791 "delay_cmd_submit": true, 00:16:23.791 "dhchap_dhgroups": [ 00:16:23.791 "null", 00:16:23.792 "ffdhe2048", 00:16:23.792 "ffdhe3072", 00:16:23.792 "ffdhe4096", 00:16:23.792 "ffdhe6144", 00:16:23.792 "ffdhe8192" 00:16:23.792 ], 00:16:23.792 "dhchap_digests": [ 00:16:23.792 "sha256", 00:16:23.792 "sha384", 00:16:23.792 "sha512" 00:16:23.792 ], 00:16:23.792 "disable_auto_failback": false, 00:16:23.792 "fast_io_fail_timeout_sec": 0, 00:16:23.792 "generate_uuids": false, 00:16:23.792 "high_priority_weight": 0, 00:16:23.792 "io_path_stat": false, 00:16:23.792 "io_queue_requests": 0, 00:16:23.792 "keep_alive_timeout_ms": 10000, 00:16:23.792 "low_priority_weight": 0, 00:16:23.792 "medium_priority_weight": 0, 00:16:23.792 "nvme_adminq_poll_period_us": 10000, 00:16:23.792 "nvme_error_stat": false, 00:16:23.792 "nvme_ioq_poll_period_us": 0, 00:16:23.792 "rdma_cm_event_timeout_ms": 0, 00:16:23.792 "rdma_max_cq_size": 0, 00:16:23.792 "rdma_srq_size": 0, 00:16:23.792 
"reconnect_delay_sec": 0, 00:16:23.792 "timeout_admin_us": 0, 00:16:23.792 "timeout_us": 0, 00:16:23.792 "transport_ack_timeout": 0, 00:16:23.792 "transport_retry_count": 4, 00:16:23.792 "transport_tos": 0 00:16:23.792 } 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "method": "bdev_nvme_set_hotplug", 00:16:23.792 "params": { 00:16:23.792 "enable": false, 00:16:23.792 "period_us": 100000 00:16:23.792 } 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "method": "bdev_malloc_create", 00:16:23.792 "params": { 00:16:23.792 "block_size": 4096, 00:16:23.792 "name": "malloc0", 00:16:23.792 "num_blocks": 8192, 00:16:23.792 "optimal_io_boundary": 0, 00:16:23.792 "physical_block_size": 4096, 00:16:23.792 "uuid": "8d0c6577-6295-45a7-9f74-29d7e967c2d6" 00:16:23.792 } 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "method": "bdev_wait_for_examine" 00:16:23.792 } 00:16:23.792 ] 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "subsystem": "nbd", 00:16:23.792 "config": [] 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "subsystem": "scheduler", 00:16:23.792 "config": [ 00:16:23.792 { 00:16:23.792 "method": "framework_set_scheduler", 00:16:23.792 "params": { 00:16:23.792 "name": "static" 00:16:23.792 } 00:16:23.792 } 00:16:23.792 ] 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "subsystem": "nvmf", 00:16:23.792 "config": [ 00:16:23.792 { 00:16:23.792 "method": "nvmf_set_config", 00:16:23.792 "params": { 00:16:23.792 "admin_cmd_passthru": { 00:16:23.792 "identify_ctrlr": false 00:16:23.792 }, 00:16:23.792 "discovery_filter": "match_any" 00:16:23.792 } 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "method": "nvmf_set_max_subsystems", 00:16:23.792 "params": { 00:16:23.792 "max_subsystems": 1024 00:16:23.792 } 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "method": "nvmf_set_crdt", 00:16:23.792 "params": { 00:16:23.792 "crdt1": 0, 00:16:23.792 "crdt2": 0, 00:16:23.792 "crdt3": 0 00:16:23.792 } 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "method": "nvmf_create_transport", 00:16:23.792 "params": { 00:16:23.792 "abort_timeout_sec": 1, 00:16:23.792 "ack_timeout": 0, 00:16:23.792 "buf_cache_size": 4294967295, 00:16:23.792 "c2h_success": false, 00:16:23.792 "data_wr_pool_size": 0, 00:16:23.792 "dif_insert_or_strip": false, 00:16:23.792 "in_capsule_data_size": 4096, 00:16:23.792 "io_unit_size": 131072, 00:16:23.792 "max_aq_depth": 128, 00:16:23.792 "max_io_qpairs_per_ctrlr": 127, 00:16:23.792 "max_io_size": 131072, 00:16:23.792 "max_queue_depth": 128, 00:16:23.792 "num_shared_buffers": 511, 00:16:23.792 "sock_priority": 0, 00:16:23.792 "trtype": "TCP", 00:16:23.792 "zcopy": false 00:16:23.792 } 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "method": "nvmf_create_subsystem", 00:16:23.792 "params": { 00:16:23.792 "allow_any_host": false, 00:16:23.792 "ana_reporting": false, 00:16:23.792 "max_cntlid": 65519, 00:16:23.792 "max_namespaces": 32, 00:16:23.792 "min_cntlid": 1, 00:16:23.792 "model_number": "SPDK bdev Controller", 00:16:23.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.792 "serial_number": "00000000000000000000" 00:16:23.792 } 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "method": "nvmf_subsystem_add_host", 00:16:23.792 "params": { 00:16:23.792 "host": "nqn.2016-06.io.spdk:host1", 00:16:23.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.792 "psk": "key0" 00:16:23.792 } 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "method": "nvmf_subsystem_add_ns", 00:16:23.792 "params": { 00:16:23.792 "namespace": { 00:16:23.792 "bdev_name": "malloc0", 00:16:23.792 "nguid": "8D0C6577629545A79F7429D7E967C2D6", 00:16:23.792 "no_auto_visible": false, 
00:16:23.792 "nsid": 1, 00:16:23.792 "uuid": "8d0c6577-6295-45a7-9f74-29d7e967c2d6" 00:16:23.792 }, 00:16:23.792 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:23.792 } 00:16:23.792 }, 00:16:23.792 { 00:16:23.792 "method": "nvmf_subsystem_add_listener", 00:16:23.792 "params": { 00:16:23.792 "listen_address": { 00:16:23.792 "adrfam": "IPv4", 00:16:23.792 "traddr": "10.0.0.2", 00:16:23.792 "trsvcid": "4420", 00:16:23.792 "trtype": "TCP" 00:16:23.792 }, 00:16:23.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.792 "secure_channel": true 00:16:23.792 } 00:16:23.792 } 00:16:23.792 ] 00:16:23.792 } 00:16:23.792 ] 00:16:23.792 }' 00:16:23.792 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.792 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.792 16:02:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85675 00:16:23.792 16:02:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85675 00:16:23.792 16:02:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:23.792 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85675 ']' 00:16:23.792 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.792 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.792 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.792 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.792 16:02:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.050 [2024-07-15 16:02:17.557052] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:24.050 [2024-07-15 16:02:17.557154] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.050 [2024-07-15 16:02:17.694167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.308 [2024-07-15 16:02:17.814069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.308 [2024-07-15 16:02:17.814133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.308 [2024-07-15 16:02:17.814145] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.308 [2024-07-15 16:02:17.814154] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.308 [2024-07-15 16:02:17.814161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:24.308 [2024-07-15 16:02:17.814253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.566 [2024-07-15 16:02:18.056404] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.566 [2024-07-15 16:02:18.088333] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:24.566 [2024-07-15 16:02:18.088580] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.824 16:02:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.824 16:02:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:24.824 16:02:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.824 16:02:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.824 16:02:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.083 16:02:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.083 16:02:18 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=85719 00:16:25.083 16:02:18 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 85719 /var/tmp/bdevperf.sock 00:16:25.083 16:02:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85719 ']' 00:16:25.083 16:02:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.083 16:02:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.083 16:02:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
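Everything TLS-specific about the target that just came up on 10.0.0.2:4420 was delivered through the JSON piped in on /dev/fd/62: the PSK file is registered with the keyring subsystem under the name "key0", nvmf_subsystem_add_host refers to the key by that name, and the listener is created with secure_channel set to true. A hand-trimmed sketch of such a config, keeping only the methods on the TLS path and dropping the tuning parameters shown in the full dump above (the /tmp/tls_target.json file name is purely illustrative, and a config trimmed this far is an assumption rather than the exact file the test generates):

cat > /tmp/tls_target.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.JgefRAlb9G" } } ] },
    { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create",
        "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 } } ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
      { "method": "nvmf_create_subsystem",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false,
                    "serial_number": "00000000000000000000" } },
      { "method": "nvmf_subsystem_add_ns",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "namespace": { "nsid": 1, "bdev_name": "malloc0" } } },
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "secure_channel": true,
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.2", "trsvcid": "4420" } } } ] }
  ]
}
EOF

# start the target inside the test's network namespace and feed it the trimmed config
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -c /tmp/tls_target.json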
00:16:25.083 16:02:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.083 16:02:18 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:25.083 16:02:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.083 16:02:18 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:16:25.083 "subsystems": [ 00:16:25.083 { 00:16:25.083 "subsystem": "keyring", 00:16:25.083 "config": [ 00:16:25.083 { 00:16:25.083 "method": "keyring_file_add_key", 00:16:25.083 "params": { 00:16:25.083 "name": "key0", 00:16:25.083 "path": "/tmp/tmp.JgefRAlb9G" 00:16:25.083 } 00:16:25.083 } 00:16:25.083 ] 00:16:25.083 }, 00:16:25.083 { 00:16:25.083 "subsystem": "iobuf", 00:16:25.083 "config": [ 00:16:25.083 { 00:16:25.083 "method": "iobuf_set_options", 00:16:25.083 "params": { 00:16:25.083 "large_bufsize": 135168, 00:16:25.083 "large_pool_count": 1024, 00:16:25.083 "small_bufsize": 8192, 00:16:25.083 "small_pool_count": 8192 00:16:25.083 } 00:16:25.083 } 00:16:25.083 ] 00:16:25.083 }, 00:16:25.083 { 00:16:25.083 "subsystem": "sock", 00:16:25.083 "config": [ 00:16:25.083 { 00:16:25.083 "method": "sock_set_default_impl", 00:16:25.083 "params": { 00:16:25.083 "impl_name": "posix" 00:16:25.083 } 00:16:25.083 }, 00:16:25.083 { 00:16:25.083 "method": "sock_impl_set_options", 00:16:25.083 "params": { 00:16:25.083 "enable_ktls": false, 00:16:25.083 "enable_placement_id": 0, 00:16:25.083 "enable_quickack": false, 00:16:25.083 "enable_recv_pipe": true, 00:16:25.083 "enable_zerocopy_send_client": false, 00:16:25.083 "enable_zerocopy_send_server": true, 00:16:25.083 "impl_name": "ssl", 00:16:25.083 "recv_buf_size": 4096, 00:16:25.083 "send_buf_size": 4096, 00:16:25.083 "tls_version": 0, 00:16:25.083 "zerocopy_threshold": 0 00:16:25.083 } 00:16:25.083 }, 00:16:25.083 { 00:16:25.083 "method": "sock_impl_set_options", 00:16:25.083 "params": { 00:16:25.083 "enable_ktls": false, 00:16:25.083 "enable_placement_id": 0, 00:16:25.083 "enable_quickack": false, 00:16:25.083 "enable_recv_pipe": true, 00:16:25.084 "enable_zerocopy_send_client": false, 00:16:25.084 "enable_zerocopy_send_server": true, 00:16:25.084 "impl_name": "posix", 00:16:25.084 "recv_buf_size": 2097152, 00:16:25.084 "send_buf_size": 2097152, 00:16:25.084 "tls_version": 0, 00:16:25.084 "zerocopy_threshold": 0 00:16:25.084 } 00:16:25.084 } 00:16:25.084 ] 00:16:25.084 }, 00:16:25.084 { 00:16:25.084 "subsystem": "vmd", 00:16:25.084 "config": [] 00:16:25.084 }, 00:16:25.084 { 00:16:25.084 "subsystem": "accel", 00:16:25.084 "config": [ 00:16:25.084 { 00:16:25.084 "method": "accel_set_options", 00:16:25.084 "params": { 00:16:25.084 "buf_count": 2048, 00:16:25.084 "large_cache_size": 16, 00:16:25.084 "sequence_count": 2048, 00:16:25.084 "small_cache_size": 128, 00:16:25.084 "task_count": 2048 00:16:25.084 } 00:16:25.084 } 00:16:25.084 ] 00:16:25.084 }, 00:16:25.084 { 00:16:25.084 "subsystem": "bdev", 00:16:25.084 "config": [ 00:16:25.084 { 00:16:25.084 "method": "bdev_set_options", 00:16:25.084 "params": { 00:16:25.084 "bdev_auto_examine": true, 00:16:25.084 "bdev_io_cache_size": 256, 00:16:25.084 "bdev_io_pool_size": 65535, 00:16:25.084 "iobuf_large_cache_size": 16, 00:16:25.084 "iobuf_small_cache_size": 128 00:16:25.084 } 00:16:25.084 }, 00:16:25.084 { 00:16:25.084 "method": "bdev_raid_set_options", 00:16:25.084 "params": { 00:16:25.084 "process_window_size_kb": 1024 00:16:25.084 } 00:16:25.084 }, 00:16:25.084 
{ 00:16:25.084 "method": "bdev_iscsi_set_options", 00:16:25.084 "params": { 00:16:25.084 "timeout_sec": 30 00:16:25.084 } 00:16:25.084 }, 00:16:25.084 { 00:16:25.084 "method": "bdev_nvme_set_options", 00:16:25.084 "params": { 00:16:25.084 "action_on_timeout": "none", 00:16:25.084 "allow_accel_sequence": false, 00:16:25.084 "arbitration_burst": 0, 00:16:25.084 "bdev_retry_count": 3, 00:16:25.084 "ctrlr_loss_timeout_sec": 0, 00:16:25.084 "delay_cmd_submit": true, 00:16:25.084 "dhchap_dhgroups": [ 00:16:25.084 "null", 00:16:25.084 "ffdhe2048", 00:16:25.084 "ffdhe3072", 00:16:25.084 "ffdhe4096", 00:16:25.084 "ffdhe6144", 00:16:25.084 "ffdhe8192" 00:16:25.084 ], 00:16:25.084 "dhchap_digests": [ 00:16:25.084 "sha256", 00:16:25.084 "sha384", 00:16:25.084 "sha512" 00:16:25.084 ], 00:16:25.084 "disable_auto_failback": false, 00:16:25.084 "fast_io_fail_timeout_sec": 0, 00:16:25.084 "generate_uuids": false, 00:16:25.084 "high_priority_weight": 0, 00:16:25.084 "io_path_stat": false, 00:16:25.084 "io_queue_requests": 512, 00:16:25.084 "keep_alive_timeout_ms": 10000, 00:16:25.084 "low_priority_weight": 0, 00:16:25.084 "medium_priority_weight": 0, 00:16:25.084 "nvme_adminq_poll_period_us": 10000, 00:16:25.084 "nvme_error_stat": false, 00:16:25.084 "nvme_ioq_poll_period_us": 0, 00:16:25.084 "rdma_cm_event_timeout_ms": 0, 00:16:25.084 "rdma_max_cq_size": 0, 00:16:25.084 "rdma_srq_size": 0, 00:16:25.084 "reconnect_delay_sec": 0, 00:16:25.084 "timeout_admin_us": 0, 00:16:25.084 "timeout_us": 0, 00:16:25.084 "transport_ack_timeout": 0, 00:16:25.084 "transport_retry_count": 4, 00:16:25.084 "transport_tos": 0 00:16:25.084 } 00:16:25.084 }, 00:16:25.084 { 00:16:25.084 "method": "bdev_nvme_attach_controller", 00:16:25.084 "params": { 00:16:25.084 "adrfam": "IPv4", 00:16:25.084 "ctrlr_loss_timeout_sec": 0, 00:16:25.084 "ddgst": false, 00:16:25.084 "fast_io_fail_timeout_sec": 0, 00:16:25.084 "hdgst": false, 00:16:25.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:25.084 "name": "nvme0", 00:16:25.084 "prchk_guard": false, 00:16:25.084 "prchk_reftag": false, 00:16:25.084 "psk": "key0", 00:16:25.084 "reconnect_delay_sec": 0, 00:16:25.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:25.084 "traddr": "10.0.0.2", 00:16:25.084 "trsvcid": "4420", 00:16:25.084 "trtype": "TCP" 00:16:25.084 } 00:16:25.084 }, 00:16:25.084 { 00:16:25.084 "method": "bdev_nvme_set_hotplug", 00:16:25.084 "params": { 00:16:25.084 "enable": false, 00:16:25.084 "period_us": 100000 00:16:25.084 } 00:16:25.084 }, 00:16:25.084 { 00:16:25.084 "method": "bdev_enable_histogram", 00:16:25.084 "params": { 00:16:25.084 "enable": true, 00:16:25.084 "name": "nvme0n1" 00:16:25.084 } 00:16:25.084 }, 00:16:25.084 { 00:16:25.084 "method": "bdev_wait_for_examine" 00:16:25.084 } 00:16:25.084 ] 00:16:25.084 }, 00:16:25.084 { 00:16:25.084 "subsystem": "nbd", 00:16:25.084 "config": [] 00:16:25.084 } 00:16:25.084 ] 00:16:25.084 }' 00:16:25.084 [2024-07-15 16:02:18.636930] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:16:25.084 [2024-07-15 16:02:18.637101] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85719 ] 00:16:25.084 [2024-07-15 16:02:18.781627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.343 [2024-07-15 16:02:18.908810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.601 [2024-07-15 16:02:19.082998] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:26.213 16:02:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.213 16:02:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:26.213 16:02:19 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:26.213 16:02:19 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:16:26.471 16:02:19 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.471 16:02:19 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:26.471 Running I/O for 1 seconds... 00:16:27.846 00:16:27.846 Latency(us) 00:16:27.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.846 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:27.846 Verification LBA range: start 0x0 length 0x2000 00:16:27.846 nvme0n1 : 1.03 3721.43 14.54 0.00 0.00 34003.44 12213.53 24784.52 00:16:27.846 =================================================================================================================== 00:16:27.846 Total : 3721.43 14.54 0.00 0.00 34003.44 12213.53 24784.52 00:16:27.846 0 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:27.846 nvmf_trace.0 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85719 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85719 ']' 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85719 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:27.846 
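The pass/fail decision for this run comes down to two RPC calls against the bdevperf socket: bdev_nvme_get_controllers must report the controller name requested at attach time, and perform_tests then releases the verify job that was parked by -z. Condensed, using the exact commands visible in the trace above:

# the controller only shows up here if the attach over the TLS-secured connection succeeded
name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]] || exit 1

# release the queued verify workload; the latency table above is its output
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests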
16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85719 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:27.846 killing process with pid 85719 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85719' 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85719 00:16:27.846 Received shutdown signal, test time was about 1.000000 seconds 00:16:27.846 00:16:27.846 Latency(us) 00:16:27.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.846 =================================================================================================================== 00:16:27.846 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85719 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:27.846 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:27.846 rmmod nvme_tcp 00:16:27.846 rmmod nvme_fabrics 00:16:28.104 rmmod nvme_keyring 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85675 ']' 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85675 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85675 ']' 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85675 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85675 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:28.104 killing process with pid 85675 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85675' 00:16:28.104 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85675 00:16:28.105 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85675 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.haDzfM17lX /tmp/tmp.aQrBs5MIUT /tmp/tmp.JgefRAlb9G 00:16:28.363 ************************************ 00:16:28.363 END TEST nvmf_tls 00:16:28.363 ************************************ 00:16:28.363 00:16:28.363 real 1m29.440s 00:16:28.363 user 2m22.826s 00:16:28.363 sys 0m28.522s 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.363 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.363 16:02:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:28.363 16:02:21 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:28.363 16:02:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:28.363 16:02:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.363 16:02:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.363 ************************************ 00:16:28.363 START TEST nvmf_fips 00:16:28.363 ************************************ 00:16:28.363 16:02:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:28.363 * Looking for test storage... 
00:16:28.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.363 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.364 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.364 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.364 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.364 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.364 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.364 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:28.364 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:16:28.364 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:16:28.364 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:16:28.364 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:16:28.624 Error setting digest 00:16:28.624 0052A45BC27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:28.624 0052A45BC27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:16:28.624 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:28.625 Cannot find device "nvmf_tgt_br" 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.625 Cannot find device "nvmf_tgt_br2" 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:28.625 Cannot find device "nvmf_tgt_br" 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:28.625 Cannot find device "nvmf_tgt_br2" 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:16:28.625 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:28.883 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:28.883 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.883 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:28.883 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.883 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:28.883 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:28.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:16:28.884 00:16:28.884 --- 10.0.0.2 ping statistics --- 00:16:28.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.884 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:28.884 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:28.884 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:16:28.884 00:16:28.884 --- 10.0.0.3 ping statistics --- 00:16:28.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.884 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:28.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:28.884 00:16:28.884 --- 10.0.0.1 ping statistics --- 00:16:28.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.884 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:28.884 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:29.142 16:02:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:29.142 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:29.142 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:29.142 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:29.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.143 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=86002 00:16:29.143 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 86002 00:16:29.143 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 86002 ']' 00:16:29.143 16:02:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:29.143 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.143 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.143 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.143 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.143 16:02:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:29.143 [2024-07-15 16:02:22.720862] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
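
Note on the nvmf_veth_init trace above: it reduces to a small, reproducible topology. The initiator stays in the default network namespace on 10.0.0.1, the two target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, all host-side veth peers are joined by the nvmf_br bridge, and iptables is opened for the NVMe/TCP port; the nvmf_tgt start that begins above then runs inside that namespace via ip netns exec. The condensed sketch below is reconstructed from the commands in the trace (interface names and addresses are the ones the log uses); the real setup lives in nvmf/common.sh and first tolerates leftovers from earlier runs, which is where the "Cannot find device" messages come from.

    # Target namespace plus three veth pairs (initiator, target, second target path)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 / 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP (port 4420) in, allow bridged traffic, and sanity-check reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
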
00:16:29.143 [2024-07-15 16:02:22.721275] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.143 [2024-07-15 16:02:22.859193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.401 [2024-07-15 16:02:22.983783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.401 [2024-07-15 16:02:22.983835] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.401 [2024-07-15 16:02:22.983847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.401 [2024-07-15 16:02:22.983856] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.401 [2024-07-15 16:02:22.983864] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.401 [2024-07-15 16:02:22.983889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.989 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.989 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:29.989 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:29.989 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:29.989 16:02:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:30.247 16:02:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.247 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:30.247 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:30.247 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:30.247 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:30.247 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:30.247 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:30.247 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:30.247 16:02:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.506 [2024-07-15 16:02:23.995808] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.506 [2024-07-15 16:02:24.011748] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:30.506 [2024-07-15 16:02:24.011982] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.506 [2024-07-15 16:02:24.043191] tcp.c:3710:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:30.506 malloc0 00:16:30.506 16:02:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:30.506 16:02:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=86054 00:16:30.506 16:02:24 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@148 -- # waitforlisten 86054 /var/tmp/bdevperf.sock 00:16:30.506 16:02:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:30.506 16:02:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 86054 ']' 00:16:30.506 16:02:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.506 16:02:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.506 16:02:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:30.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:30.506 16:02:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.506 16:02:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:30.506 [2024-07-15 16:02:24.147432] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:30.506 [2024-07-15 16:02:24.147794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86054 ] 00:16:30.788 [2024-07-15 16:02:24.283807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.788 [2024-07-15 16:02:24.414976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.721 16:02:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.721 16:02:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:31.721 16:02:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:31.721 [2024-07-15 16:02:25.353648] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:31.721 [2024-07-15 16:02:25.354257] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:31.721 TLSTESTn1 00:16:31.979 16:02:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:31.979 Running I/O for 10 seconds... 
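
Behind the fips.sh trace above, the TLS path is exercised in three steps: the interchange PSK is written to a 0600 key file; the target side (setup_nvmf_tgt_conf, abbreviated here) creates the TCP transport, the cnode1 subsystem and the 10.0.0.2:4420 listener and registers the host with that PSK path (the "PSK path" deprecation warning above refers to exactly that option); and a separate bdevperf application attaches with the same key before perform_tests drives verify I/O for 10 seconds. A condensed sketch using the key value, paths and NQNs that appear in the trace:

    # Shared PSK (value copied from the trace); both sides read it from a 0600 file
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"

    # Initiator side: bdevperf with its own RPC socket, attached over TLS via --psk
    # (fips.sh waits for /var/tmp/bdevperf.sock to appear before issuing the RPC)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests
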
00:16:41.948 00:16:41.948 Latency(us) 00:16:41.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.948 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:41.948 Verification LBA range: start 0x0 length 0x2000 00:16:41.948 TLSTESTn1 : 10.03 3822.57 14.93 0.00 0.00 33406.65 8817.57 28120.90 00:16:41.948 =================================================================================================================== 00:16:41.948 Total : 3822.57 14.93 0.00 0.00 33406.65 8817.57 28120.90 00:16:41.948 0 00:16:41.948 16:02:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:41.948 16:02:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:41.948 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:16:41.948 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:16:41.948 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:41.948 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:41.948 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:41.948 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:41.948 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:41.948 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:41.948 nvmf_trace.0 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 86054 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 86054 ']' 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 86054 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86054 00:16:42.207 killing process with pid 86054 00:16:42.207 Received shutdown signal, test time was about 10.000000 seconds 00:16:42.207 00:16:42.207 Latency(us) 00:16:42.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.207 =================================================================================================================== 00:16:42.207 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86054' 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 86054 00:16:42.207 [2024-07-15 16:02:35.706500] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:42.207 16:02:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 86054 00:16:42.465 16:02:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:42.465 16:02:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
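
Cleanup, which starts above and continues below, mirrors the setup: the tracepoint buffer is archived out of shared memory for offline analysis, bdevperf and then the target are killed, the kernel nvme-tcp stack is unloaded, and the namespace, addresses and key file are removed. Roughly, using the pids from this run (killprocess in autotest_common.sh also waits for the process to exit, and _remove_spdk_ns is assumed here to amount to deleting the netns):

    # Archive the tracepoint buffer before anything is torn down
    tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
    kill 86054                         # bdevperf
    sync
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics / nvme_keyring, per the rmmod lines below
    modprobe -v -r nvme-fabrics
    kill 86002                         # nvmf_tgt
    ip netns delete nvmf_tgt_ns_spdk   # assumption: what _remove_spdk_ns boils down to
    ip -4 addr flush nvmf_init_if
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
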
00:16:42.465 16:02:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:16:42.465 16:02:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.465 16:02:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:16:42.465 16:02:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.465 16:02:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.465 rmmod nvme_tcp 00:16:42.465 rmmod nvme_fabrics 00:16:42.465 rmmod nvme_keyring 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 86002 ']' 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 86002 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 86002 ']' 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 86002 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86002 00:16:42.465 killing process with pid 86002 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86002' 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 86002 00:16:42.465 [2024-07-15 16:02:36.060091] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:42.465 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 86002 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:42.731 00:16:42.731 real 0m14.373s 00:16:42.731 user 0m19.543s 00:16:42.731 sys 0m5.724s 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.731 16:02:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:42.731 ************************************ 00:16:42.731 END TEST nvmf_fips 00:16:42.731 ************************************ 00:16:42.731 16:02:36 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:42.731 16:02:36 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:16:42.731 16:02:36 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:16:42.731 16:02:36 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:16:42.731 16:02:36 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.731 16:02:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:42.731 16:02:36 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:16:42.731 16:02:36 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:42.731 16:02:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:42.731 16:02:36 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:16:42.731 16:02:36 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:42.731 16:02:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:42.731 16:02:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.731 16:02:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:42.731 ************************************ 00:16:42.731 START TEST nvmf_multicontroller 00:16:42.731 ************************************ 00:16:42.731 16:02:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:43.017 * Looking for test storage... 00:16:43.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
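
As the nvmf/common.sh lines above show, every run generates a fresh host identity that later connect calls reuse. A minimal sketch of those assignments; the hostid derivation from the NQN's uuid suffix is an assumption about common.sh, with the values actually traced in this run shown in the comments:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # traced: nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: uuid tail of the hostnqn (traced: a185c444-aaeb-4d13-aa60-df1b0266600d)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
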
00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.017 16:02:36 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:43.017 Cannot find device "nvmf_tgt_br" 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.017 Cannot find device "nvmf_tgt_br2" 00:16:43.017 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:43.018 Cannot find device "nvmf_tgt_br" 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:43.018 Cannot find device "nvmf_tgt_br2" 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:43.018 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:43.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:43.276 00:16:43.276 --- 10.0.0.2 ping statistics --- 00:16:43.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.276 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:43.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:43.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:16:43.276 00:16:43.276 --- 10.0.0.3 ping statistics --- 00:16:43.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.276 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:43.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:43.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:43.276 00:16:43.276 --- 10.0.0.1 ping statistics --- 00:16:43.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.276 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=86423 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 86423 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86423 ']' 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.276 16:02:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:43.276 [2024-07-15 16:02:36.961815] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:43.276 [2024-07-15 16:02:36.961978] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.538 [2024-07-15 16:02:37.097642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:43.538 [2024-07-15 16:02:37.225664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
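
Once the target is up on mask 0xE (three reactors, per the notices around this point), the RPCs traced next build the actual multicontroller layout: a TCP transport, then two subsystems, each backed by a 64 MiB / 512-byte-block malloc bdev and listening on both port 4420 and 4421 of 10.0.0.2, so one bdevperf host can reach the same subsystem over two network paths. Condensed into rpc.py form; the trace issues these one at a time through the rpc_cmd wrapper against the target's default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2; do
        $rpc bdev_malloc_create 64 512 -b Malloc$((i - 1))
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i - 1))
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
    done
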
00:16:43.538 [2024-07-15 16:02:37.225902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.538 [2024-07-15 16:02:37.226014] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.538 [2024-07-15 16:02:37.226091] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.538 [2024-07-15 16:02:37.226165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.538 [2024-07-15 16:02:37.226674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.538 [2024-07-15 16:02:37.226829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.538 [2024-07-15 16:02:37.226933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 [2024-07-15 16:02:38.061178] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 Malloc0 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 [2024-07-15 16:02:38.130038] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 [2024-07-15 16:02:38.137888] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 Malloc1 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.472 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:44.730 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.730 16:02:38 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=86475 00:16:44.730 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:44.730 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.730 16:02:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86475 /var/tmp/bdevperf.sock 00:16:44.730 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86475 ']' 00:16:44.730 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:44.730 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:44.730 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:44.730 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.730 16:02:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:45.663 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.663 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:45.663 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:45.663 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.663 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:45.922 NVMe0n1 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.922 1 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local 
arg=rpc_cmd 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:45.922 2024/07/15 16:02:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:45.922 request: 00:16:45.922 { 00:16:45.922 "method": "bdev_nvme_attach_controller", 00:16:45.922 "params": { 00:16:45.922 "name": "NVMe0", 00:16:45.922 "trtype": "tcp", 00:16:45.922 "traddr": "10.0.0.2", 00:16:45.922 "adrfam": "ipv4", 00:16:45.922 "trsvcid": "4420", 00:16:45.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.922 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:45.922 "hostaddr": "10.0.0.2", 00:16:45.922 "hostsvcid": "60000", 00:16:45.922 "prchk_reftag": false, 00:16:45.922 "prchk_guard": false, 00:16:45.922 "hdgst": false, 00:16:45.922 "ddgst": false 00:16:45.922 } 00:16:45.922 } 00:16:45.922 Got JSON-RPC error response 00:16:45.922 GoRPCClient: error on JSON-RPC call 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:45.922 16:02:39 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.922 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:45.923 2024/07/15 16:02:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:45.923 request: 00:16:45.923 { 00:16:45.923 "method": "bdev_nvme_attach_controller", 00:16:45.923 "params": { 00:16:45.923 "name": "NVMe0", 00:16:45.923 "trtype": "tcp", 00:16:45.923 "traddr": "10.0.0.2", 00:16:45.923 "adrfam": "ipv4", 00:16:45.923 "trsvcid": "4420", 00:16:45.923 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:45.923 "hostaddr": "10.0.0.2", 00:16:45.923 "hostsvcid": "60000", 00:16:45.923 "prchk_reftag": false, 00:16:45.923 "prchk_guard": false, 00:16:45.923 "hdgst": false, 00:16:45.923 "ddgst": false 00:16:45.923 } 00:16:45.923 } 00:16:45.923 Got JSON-RPC error response 00:16:45.923 GoRPCClient: error on JSON-RPC call 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:45.923 16:02:39 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:45.923 2024/07/15 16:02:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:16:45.923 request: 00:16:45.923 { 00:16:45.923 "method": "bdev_nvme_attach_controller", 00:16:45.923 "params": { 00:16:45.923 "name": "NVMe0", 00:16:45.923 "trtype": "tcp", 00:16:45.923 "traddr": "10.0.0.2", 00:16:45.923 "adrfam": "ipv4", 00:16:45.923 "trsvcid": "4420", 00:16:45.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.923 "hostaddr": "10.0.0.2", 00:16:45.923 "hostsvcid": "60000", 00:16:45.923 "prchk_reftag": false, 00:16:45.923 "prchk_guard": false, 00:16:45.923 "hdgst": false, 00:16:45.923 "ddgst": false, 00:16:45.923 "multipath": "disable" 00:16:45.923 } 00:16:45.923 } 00:16:45.923 Got JSON-RPC error response 00:16:45.923 GoRPCClient: error on JSON-RPC call 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:45.923 2024/07/15 16:02:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:45.923 request: 00:16:45.923 { 00:16:45.923 "method": "bdev_nvme_attach_controller", 00:16:45.923 "params": { 00:16:45.923 "name": "NVMe0", 00:16:45.923 "trtype": "tcp", 00:16:45.923 "traddr": "10.0.0.2", 00:16:45.923 "adrfam": "ipv4", 00:16:45.923 "trsvcid": "4420", 00:16:45.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.923 "hostaddr": "10.0.0.2", 00:16:45.923 "hostsvcid": "60000", 00:16:45.923 "prchk_reftag": false, 00:16:45.923 "prchk_guard": false, 00:16:45.923 "hdgst": false, 00:16:45.923 "ddgst": false, 00:16:45.923 "multipath": "failover" 00:16:45.923 } 00:16:45.923 } 00:16:45.923 Got JSON-RPC error response 00:16:45.923 GoRPCClient: error on JSON-RPC call 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:45.923 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.923 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:46.181 00:16:46.181 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.181 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:46.181 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:46.181 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:46.182 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:46.182 16:02:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.182 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:46.182 16:02:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:47.115 0 00:16:47.115 16:02:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:47.115 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.115 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:47.115 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.115 16:02:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 86475 00:16:47.115 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86475 ']' 00:16:47.115 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86475 00:16:47.115 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:47.115 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.115 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86475 00:16:47.373 killing process with pid 86475 00:16:47.373 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:47.373 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:47.373 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86475' 00:16:47.373 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86475 00:16:47.373 16:02:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86475 00:16:47.631 16:02:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.631 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.631 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1611 -- # sort -u 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:16:47.632 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:47.632 [2024-07-15 16:02:38.284853] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:47.632 [2024-07-15 16:02:38.285057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86475 ] 00:16:47.632 [2024-07-15 16:02:38.442397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.632 [2024-07-15 16:02:38.591616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.632 [2024-07-15 16:02:39.646074] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 0f265809-5a3a-480c-9e37-073638bfc194 already exists 00:16:47.632 [2024-07-15 16:02:39.646140] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:0f265809-5a3a-480c-9e37-073638bfc194 alias for bdev NVMe1n1 00:16:47.632 [2024-07-15 16:02:39.646159] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:47.632 Running I/O for 1 seconds... 00:16:47.632 00:16:47.632 Latency(us) 00:16:47.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.632 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:47.632 NVMe0n1 : 1.01 19398.65 75.78 0.00 0.00 6588.33 3842.79 15609.48 00:16:47.632 =================================================================================================================== 00:16:47.632 Total : 19398.65 75.78 0.00 0.00 6588.33 3842.79 15609.48 00:16:47.632 Received shutdown signal, test time was about 1.000000 seconds 00:16:47.632 00:16:47.632 Latency(us) 00:16:47.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.632 =================================================================================================================== 00:16:47.632 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.632 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.632 rmmod nvme_tcp 00:16:47.632 rmmod nvme_fabrics 00:16:47.632 rmmod nvme_keyring 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.632 16:02:41 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 86423 ']' 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 86423 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86423 ']' 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86423 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86423 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:47.632 killing process with pid 86423 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86423' 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86423 00:16:47.632 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86423 00:16:47.890 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:47.890 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:47.890 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:47.890 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.890 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:47.890 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.890 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.891 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.891 16:02:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:47.891 00:16:47.891 real 0m5.144s 00:16:47.891 user 0m16.348s 00:16:47.891 sys 0m1.095s 00:16:47.891 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:47.891 16:02:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:47.891 ************************************ 00:16:47.891 END TEST nvmf_multicontroller 00:16:47.891 ************************************ 00:16:48.149 16:02:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:48.149 16:02:41 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:48.149 16:02:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:48.149 16:02:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:48.149 16:02:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:48.149 ************************************ 00:16:48.149 START TEST nvmf_aer 00:16:48.149 ************************************ 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:48.149 * Looking for test storage... 00:16:48.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:48.149 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:48.150 Cannot find device "nvmf_tgt_br" 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.150 Cannot find device "nvmf_tgt_br2" 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:48.150 Cannot find device "nvmf_tgt_br" 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:48.150 Cannot find device "nvmf_tgt_br2" 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:48.150 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.408 
16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:48.408 16:02:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:48.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:16:48.408 00:16:48.408 --- 10.0.0.2 ping statistics --- 00:16:48.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.408 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:48.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:48.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:48.408 00:16:48.408 --- 10.0.0.3 ping statistics --- 00:16:48.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.408 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:48.408 00:16:48.408 --- 10.0.0.1 ping statistics --- 00:16:48.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.408 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.408 16:02:42 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:48.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86735 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86735 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86735 ']' 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.409 16:02:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:48.667 [2024-07-15 16:02:42.181245] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:48.667 [2024-07-15 16:02:42.181344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.667 [2024-07-15 16:02:42.321444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.925 [2024-07-15 16:02:42.455167] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.925 [2024-07-15 16:02:42.455505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:48.925 [2024-07-15 16:02:42.455675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.925 [2024-07-15 16:02:42.455824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.925 [2024-07-15 16:02:42.455866] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.925 [2024-07-15 16:02:42.456095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.925 [2024-07-15 16:02:42.456219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.925 [2024-07-15 16:02:42.456329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.925 [2024-07-15 16:02:42.456331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.490 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.490 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:16:49.490 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:49.490 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.490 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:49.763 [2024-07-15 16:02:43.256800] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:49.763 Malloc0 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:49.763 [2024-07-15 16:02:43.327858] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:49.763 [ 00:16:49.763 { 00:16:49.763 "allow_any_host": true, 00:16:49.763 "hosts": [], 00:16:49.763 "listen_addresses": [], 00:16:49.763 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:49.763 "subtype": "Discovery" 00:16:49.763 }, 00:16:49.763 { 00:16:49.763 "allow_any_host": true, 00:16:49.763 "hosts": [], 00:16:49.763 "listen_addresses": [ 00:16:49.763 { 00:16:49.763 "adrfam": "IPv4", 00:16:49.763 "traddr": "10.0.0.2", 00:16:49.763 "trsvcid": "4420", 00:16:49.763 "trtype": "TCP" 00:16:49.763 } 00:16:49.763 ], 00:16:49.763 "max_cntlid": 65519, 00:16:49.763 "max_namespaces": 2, 00:16:49.763 "min_cntlid": 1, 00:16:49.763 "model_number": "SPDK bdev Controller", 00:16:49.763 "namespaces": [ 00:16:49.763 { 00:16:49.763 "bdev_name": "Malloc0", 00:16:49.763 "name": "Malloc0", 00:16:49.763 "nguid": "AEDD35E3D18148FF90A8947E00E33238", 00:16:49.763 "nsid": 1, 00:16:49.763 "uuid": "aedd35e3-d181-48ff-90a8-947e00e33238" 00:16:49.763 } 00:16:49.763 ], 00:16:49.763 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.763 "serial_number": "SPDK00000000000001", 00:16:49.763 "subtype": "NVMe" 00:16:49.763 } 00:16:49.763 ] 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86789 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:16:49.763 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 Malloc1 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 [ 00:16:50.020 { 00:16:50.020 "allow_any_host": true, 00:16:50.020 "hosts": [], 00:16:50.020 "listen_addresses": [], 00:16:50.020 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:50.020 "subtype": "Discovery" 00:16:50.020 }, 00:16:50.020 { 00:16:50.020 "allow_any_host": true, 00:16:50.020 "hosts": [], 00:16:50.020 "listen_addresses": [ 00:16:50.020 { 00:16:50.020 "adrfam": "IPv4", 00:16:50.020 "traddr": "10.0.0.2", 00:16:50.020 "trsvcid": "4420", 00:16:50.020 "trtype": "TCP" 00:16:50.020 } 00:16:50.020 ], 00:16:50.020 "max_cntlid": 65519, 00:16:50.020 "max_namespaces": 2, 00:16:50.020 "min_cntlid": 1, 00:16:50.020 "model_number": "SPDK bdev Controller", 00:16:50.020 "namespaces": [ 00:16:50.020 { 00:16:50.020 "bdev_name": "Malloc0", 00:16:50.020 Asynchronous Event Request test 00:16:50.020 Attaching to 10.0.0.2 00:16:50.020 Attached to 10.0.0.2 00:16:50.020 Registering asynchronous event callbacks... 00:16:50.020 Starting namespace attribute notice tests for all controllers... 00:16:50.020 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:50.020 aer_cb - Changed Namespace 00:16:50.020 Cleaning up... 
00:16:50.020 "name": "Malloc0", 00:16:50.020 "nguid": "AEDD35E3D18148FF90A8947E00E33238", 00:16:50.020 "nsid": 1, 00:16:50.020 "uuid": "aedd35e3-d181-48ff-90a8-947e00e33238" 00:16:50.020 }, 00:16:50.020 { 00:16:50.020 "bdev_name": "Malloc1", 00:16:50.020 "name": "Malloc1", 00:16:50.020 "nguid": "53697A7DEC894D9AB3960719323DFA49", 00:16:50.020 "nsid": 2, 00:16:50.020 "uuid": "53697a7d-ec89-4d9a-b396-0719323dfa49" 00:16:50.020 } 00:16:50.020 ], 00:16:50.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.020 "serial_number": "SPDK00000000000001", 00:16:50.020 "subtype": "NVMe" 00:16:50.020 } 00:16:50.020 ] 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86789 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:50.020 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:50.276 rmmod nvme_tcp 00:16:50.276 rmmod nvme_fabrics 00:16:50.276 rmmod nvme_keyring 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86735 ']' 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 86735 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86735 ']' 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86735 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86735 00:16:50.276 killing 
process with pid 86735 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86735' 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86735 00:16:50.276 16:02:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86735 00:16:50.539 16:02:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:50.539 16:02:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:50.539 16:02:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:50.539 16:02:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.539 16:02:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:50.539 16:02:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.539 16:02:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.539 16:02:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.539 16:02:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:50.539 00:16:50.539 real 0m2.438s 00:16:50.539 user 0m6.582s 00:16:50.539 sys 0m0.676s 00:16:50.539 ************************************ 00:16:50.539 END TEST nvmf_aer 00:16:50.539 ************************************ 00:16:50.539 16:02:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.539 16:02:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:50.539 16:02:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:50.539 16:02:44 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:50.539 16:02:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:50.539 16:02:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.539 16:02:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.539 ************************************ 00:16:50.539 START TEST nvmf_async_init 00:16:50.539 ************************************ 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:50.539 * Looking for test storage... 
00:16:50.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.539 16:02:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6acb40b7dd3a4f0d806f3a1dd746670b 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.540 16:02:44 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:50.540 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:50.796 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:50.796 Cannot find device "nvmf_tgt_br" 00:16:50.796 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:16:50.796 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.797 Cannot find device "nvmf_tgt_br2" 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:50.797 Cannot find device "nvmf_tgt_br" 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:16:50.797 Cannot find device "nvmf_tgt_br2" 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:50.797 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:51.053 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:51.053 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:51.053 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:51.053 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:51.053 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:16:51.053 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.053 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:51.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:16:51.053 00:16:51.053 --- 10.0.0.2 ping statistics --- 00:16:51.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.053 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:51.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:16:51.054 00:16:51.054 --- 10.0.0.3 ping statistics --- 00:16:51.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.054 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:51.054 00:16:51.054 --- 10.0.0.1 ping statistics --- 00:16:51.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.054 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86961 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86961 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86961 ']' 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
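(A condensed sketch of the veth/bridge topology that the nvmf_veth_init trace above just built. Interface names and the 10.0.0.x addresses are copied from the traced commands in this log; the individual "ip addr add" and "ip link set ... up" steps are abbreviated, so this is a summary of the setup, not a substitute for nvmf/common.sh.)

    ip netns add nvmf_tgt_ns_spdk                                    # target runs inside its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, 10.0.0.1/24 on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side, 10.0.0.2/24 (moved into the netns)
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target address, 10.0.0.3/24 (also in the netns)
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                                  # bridge joins the *_br ends of the veth pairs
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) on the initiator interface
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let bridged traffic through
    ping -c 1 10.0.0.2                                               # connectivity checks, as in the ping output above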
00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.054 16:02:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:51.054 [2024-07-15 16:02:44.677927] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:51.054 [2024-07-15 16:02:44.678059] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.311 [2024-07-15 16:02:44.819602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.311 [2024-07-15 16:02:44.951364] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.311 [2024-07-15 16:02:44.951438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.311 [2024-07-15 16:02:44.951453] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.311 [2024-07-15 16:02:44.951464] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.311 [2024-07-15 16:02:44.951474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.311 [2024-07-15 16:02:44.951504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.241 [2024-07-15 16:02:45.766163] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.241 null0 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.241 
16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6acb40b7dd3a4f0d806f3a1dd746670b 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.241 [2024-07-15 16:02:45.814266] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.241 16:02:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.500 nvme0n1 00:16:52.500 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.500 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:52.500 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.500 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.500 [ 00:16:52.500 { 00:16:52.500 "aliases": [ 00:16:52.500 "6acb40b7-dd3a-4f0d-806f-3a1dd746670b" 00:16:52.500 ], 00:16:52.500 "assigned_rate_limits": { 00:16:52.500 "r_mbytes_per_sec": 0, 00:16:52.500 "rw_ios_per_sec": 0, 00:16:52.500 "rw_mbytes_per_sec": 0, 00:16:52.500 "w_mbytes_per_sec": 0 00:16:52.500 }, 00:16:52.500 "block_size": 512, 00:16:52.500 "claimed": false, 00:16:52.500 "driver_specific": { 00:16:52.500 "mp_policy": "active_passive", 00:16:52.501 "nvme": [ 00:16:52.501 { 00:16:52.501 "ctrlr_data": { 00:16:52.501 "ana_reporting": false, 00:16:52.501 "cntlid": 1, 00:16:52.501 "firmware_revision": "24.09", 00:16:52.501 "model_number": "SPDK bdev Controller", 00:16:52.501 "multi_ctrlr": true, 00:16:52.501 "oacs": { 00:16:52.501 "firmware": 0, 00:16:52.501 "format": 0, 00:16:52.501 "ns_manage": 0, 00:16:52.501 "security": 0 00:16:52.501 }, 00:16:52.501 "serial_number": "00000000000000000000", 00:16:52.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:52.501 "vendor_id": "0x8086" 00:16:52.501 }, 00:16:52.501 "ns_data": { 00:16:52.501 "can_share": true, 00:16:52.501 "id": 1 00:16:52.501 }, 00:16:52.501 "trid": { 00:16:52.501 "adrfam": "IPv4", 
00:16:52.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:52.501 "traddr": "10.0.0.2", 00:16:52.501 "trsvcid": "4420", 00:16:52.501 "trtype": "TCP" 00:16:52.501 }, 00:16:52.501 "vs": { 00:16:52.501 "nvme_version": "1.3" 00:16:52.501 } 00:16:52.501 } 00:16:52.501 ] 00:16:52.501 }, 00:16:52.501 "memory_domains": [ 00:16:52.501 { 00:16:52.501 "dma_device_id": "system", 00:16:52.501 "dma_device_type": 1 00:16:52.501 } 00:16:52.501 ], 00:16:52.501 "name": "nvme0n1", 00:16:52.501 "num_blocks": 2097152, 00:16:52.501 "product_name": "NVMe disk", 00:16:52.501 "supported_io_types": { 00:16:52.501 "abort": true, 00:16:52.501 "compare": true, 00:16:52.501 "compare_and_write": true, 00:16:52.501 "copy": true, 00:16:52.501 "flush": true, 00:16:52.501 "get_zone_info": false, 00:16:52.501 "nvme_admin": true, 00:16:52.501 "nvme_io": true, 00:16:52.501 "nvme_io_md": false, 00:16:52.501 "nvme_iov_md": false, 00:16:52.501 "read": true, 00:16:52.501 "reset": true, 00:16:52.501 "seek_data": false, 00:16:52.501 "seek_hole": false, 00:16:52.501 "unmap": false, 00:16:52.501 "write": true, 00:16:52.501 "write_zeroes": true, 00:16:52.501 "zcopy": false, 00:16:52.501 "zone_append": false, 00:16:52.501 "zone_management": false 00:16:52.501 }, 00:16:52.501 "uuid": "6acb40b7-dd3a-4f0d-806f-3a1dd746670b", 00:16:52.501 "zoned": false 00:16:52.501 } 00:16:52.501 ] 00:16:52.501 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.501 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:52.501 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.501 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.501 [2024-07-15 16:02:46.086211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:52.501 [2024-07-15 16:02:46.086315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa6b70 (9): Bad file descriptor 00:16:52.501 [2024-07-15 16:02:46.218141] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:52.501 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.501 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:52.501 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.501 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.759 [ 00:16:52.759 { 00:16:52.759 "aliases": [ 00:16:52.759 "6acb40b7-dd3a-4f0d-806f-3a1dd746670b" 00:16:52.759 ], 00:16:52.759 "assigned_rate_limits": { 00:16:52.759 "r_mbytes_per_sec": 0, 00:16:52.759 "rw_ios_per_sec": 0, 00:16:52.759 "rw_mbytes_per_sec": 0, 00:16:52.759 "w_mbytes_per_sec": 0 00:16:52.759 }, 00:16:52.759 "block_size": 512, 00:16:52.759 "claimed": false, 00:16:52.759 "driver_specific": { 00:16:52.759 "mp_policy": "active_passive", 00:16:52.759 "nvme": [ 00:16:52.759 { 00:16:52.759 "ctrlr_data": { 00:16:52.759 "ana_reporting": false, 00:16:52.759 "cntlid": 2, 00:16:52.759 "firmware_revision": "24.09", 00:16:52.759 "model_number": "SPDK bdev Controller", 00:16:52.759 "multi_ctrlr": true, 00:16:52.759 "oacs": { 00:16:52.759 "firmware": 0, 00:16:52.759 "format": 0, 00:16:52.759 "ns_manage": 0, 00:16:52.759 "security": 0 00:16:52.759 }, 00:16:52.759 "serial_number": "00000000000000000000", 00:16:52.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:52.759 "vendor_id": "0x8086" 00:16:52.759 }, 00:16:52.759 "ns_data": { 00:16:52.759 "can_share": true, 00:16:52.759 "id": 1 00:16:52.759 }, 00:16:52.759 "trid": { 00:16:52.759 "adrfam": "IPv4", 00:16:52.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:52.759 "traddr": "10.0.0.2", 00:16:52.759 "trsvcid": "4420", 00:16:52.759 "trtype": "TCP" 00:16:52.759 }, 00:16:52.759 "vs": { 00:16:52.759 "nvme_version": "1.3" 00:16:52.759 } 00:16:52.759 } 00:16:52.759 ] 00:16:52.759 }, 00:16:52.759 "memory_domains": [ 00:16:52.759 { 00:16:52.759 "dma_device_id": "system", 00:16:52.759 "dma_device_type": 1 00:16:52.759 } 00:16:52.759 ], 00:16:52.759 "name": "nvme0n1", 00:16:52.759 "num_blocks": 2097152, 00:16:52.759 "product_name": "NVMe disk", 00:16:52.759 "supported_io_types": { 00:16:52.759 "abort": true, 00:16:52.759 "compare": true, 00:16:52.759 "compare_and_write": true, 00:16:52.759 "copy": true, 00:16:52.759 "flush": true, 00:16:52.759 "get_zone_info": false, 00:16:52.759 "nvme_admin": true, 00:16:52.759 "nvme_io": true, 00:16:52.759 "nvme_io_md": false, 00:16:52.759 "nvme_iov_md": false, 00:16:52.759 "read": true, 00:16:52.759 "reset": true, 00:16:52.759 "seek_data": false, 00:16:52.759 "seek_hole": false, 00:16:52.759 "unmap": false, 00:16:52.759 "write": true, 00:16:52.759 "write_zeroes": true, 00:16:52.759 "zcopy": false, 00:16:52.759 "zone_append": false, 00:16:52.759 "zone_management": false 00:16:52.759 }, 00:16:52.759 "uuid": "6acb40b7-dd3a-4f0d-806f-3a1dd746670b", 00:16:52.759 "zoned": false 00:16:52.759 } 00:16:52.759 ] 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:16:52.759 16:02:46 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.PfRztRfnHX 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.PfRztRfnHX 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.759 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.760 [2024-07-15 16:02:46.290422] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:52.760 [2024-07-15 16:02:46.290624] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PfRztRfnHX 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.760 [2024-07-15 16:02:46.298422] tcp.c:3710:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PfRztRfnHX 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.760 [2024-07-15 16:02:46.306464] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:52.760 [2024-07-15 16:02:46.306534] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:52.760 nvme0n1 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.760 [ 00:16:52.760 { 00:16:52.760 "aliases": [ 00:16:52.760 "6acb40b7-dd3a-4f0d-806f-3a1dd746670b" 00:16:52.760 ], 00:16:52.760 "assigned_rate_limits": { 00:16:52.760 "r_mbytes_per_sec": 0, 00:16:52.760 
"rw_ios_per_sec": 0, 00:16:52.760 "rw_mbytes_per_sec": 0, 00:16:52.760 "w_mbytes_per_sec": 0 00:16:52.760 }, 00:16:52.760 "block_size": 512, 00:16:52.760 "claimed": false, 00:16:52.760 "driver_specific": { 00:16:52.760 "mp_policy": "active_passive", 00:16:52.760 "nvme": [ 00:16:52.760 { 00:16:52.760 "ctrlr_data": { 00:16:52.760 "ana_reporting": false, 00:16:52.760 "cntlid": 3, 00:16:52.760 "firmware_revision": "24.09", 00:16:52.760 "model_number": "SPDK bdev Controller", 00:16:52.760 "multi_ctrlr": true, 00:16:52.760 "oacs": { 00:16:52.760 "firmware": 0, 00:16:52.760 "format": 0, 00:16:52.760 "ns_manage": 0, 00:16:52.760 "security": 0 00:16:52.760 }, 00:16:52.760 "serial_number": "00000000000000000000", 00:16:52.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:52.760 "vendor_id": "0x8086" 00:16:52.760 }, 00:16:52.760 "ns_data": { 00:16:52.760 "can_share": true, 00:16:52.760 "id": 1 00:16:52.760 }, 00:16:52.760 "trid": { 00:16:52.760 "adrfam": "IPv4", 00:16:52.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:52.760 "traddr": "10.0.0.2", 00:16:52.760 "trsvcid": "4421", 00:16:52.760 "trtype": "TCP" 00:16:52.760 }, 00:16:52.760 "vs": { 00:16:52.760 "nvme_version": "1.3" 00:16:52.760 } 00:16:52.760 } 00:16:52.760 ] 00:16:52.760 }, 00:16:52.760 "memory_domains": [ 00:16:52.760 { 00:16:52.760 "dma_device_id": "system", 00:16:52.760 "dma_device_type": 1 00:16:52.760 } 00:16:52.760 ], 00:16:52.760 "name": "nvme0n1", 00:16:52.760 "num_blocks": 2097152, 00:16:52.760 "product_name": "NVMe disk", 00:16:52.760 "supported_io_types": { 00:16:52.760 "abort": true, 00:16:52.760 "compare": true, 00:16:52.760 "compare_and_write": true, 00:16:52.760 "copy": true, 00:16:52.760 "flush": true, 00:16:52.760 "get_zone_info": false, 00:16:52.760 "nvme_admin": true, 00:16:52.760 "nvme_io": true, 00:16:52.760 "nvme_io_md": false, 00:16:52.760 "nvme_iov_md": false, 00:16:52.760 "read": true, 00:16:52.760 "reset": true, 00:16:52.760 "seek_data": false, 00:16:52.760 "seek_hole": false, 00:16:52.760 "unmap": false, 00:16:52.760 "write": true, 00:16:52.760 "write_zeroes": true, 00:16:52.760 "zcopy": false, 00:16:52.760 "zone_append": false, 00:16:52.760 "zone_management": false 00:16:52.760 }, 00:16:52.760 "uuid": "6acb40b7-dd3a-4f0d-806f-3a1dd746670b", 00:16:52.760 "zoned": false 00:16:52.760 } 00:16:52.760 ] 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.PfRztRfnHX 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 
00:16:52.760 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:52.760 rmmod nvme_tcp 00:16:52.760 rmmod nvme_fabrics 00:16:53.018 rmmod nvme_keyring 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86961 ']' 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86961 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86961 ']' 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86961 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86961 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:53.018 killing process with pid 86961 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86961' 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86961 00:16:53.018 [2024-07-15 16:02:46.555269] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:53.018 [2024-07-15 16:02:46.555316] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:53.018 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86961 00:16:53.276 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:53.276 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:53.276 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:53.276 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.276 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:53.276 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.276 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.276 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.276 16:02:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:53.276 00:16:53.276 real 0m2.677s 00:16:53.276 user 0m2.523s 00:16:53.276 sys 0m0.659s 00:16:53.276 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:53.276 16:02:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:53.276 ************************************ 00:16:53.276 END TEST nvmf_async_init 00:16:53.276 ************************************ 00:16:53.276 16:02:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:53.276 16:02:46 nvmf_tcp -- 
nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:53.277 16:02:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:53.277 16:02:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.277 16:02:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:53.277 ************************************ 00:16:53.277 START TEST dma 00:16:53.277 ************************************ 00:16:53.277 16:02:46 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:53.277 * Looking for test storage... 00:16:53.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:53.277 16:02:46 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:53.277 16:02:46 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.277 16:02:46 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.277 16:02:46 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.277 16:02:46 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.277 16:02:46 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.277 16:02:46 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.277 16:02:46 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:16:53.277 16:02:46 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:53.277 16:02:46 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.277 16:02:46 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:16:53.277 16:02:46 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:16:53.277 00:16:53.277 real 0m0.102s 00:16:53.277 user 0m0.055s 00:16:53.277 sys 0m0.054s 00:16:53.277 16:02:46 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:53.277 16:02:46 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:16:53.277 ************************************ 00:16:53.277 END TEST dma 00:16:53.277 ************************************ 00:16:53.277 16:02:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:53.277 16:02:47 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:53.277 16:02:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:53.277 16:02:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.536 16:02:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:53.536 
************************************ 00:16:53.536 START TEST nvmf_identify 00:16:53.536 ************************************ 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:53.536 * Looking for test storage... 00:16:53.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:53.536 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:53.537 Cannot find device "nvmf_tgt_br" 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:53.537 Cannot find device "nvmf_tgt_br2" 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:53.537 Cannot find device "nvmf_tgt_br" 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:53.537 Cannot find device "nvmf_tgt_br2" 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:53.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.537 16:02:47 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:53.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:53.537 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:53.795 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:53.795 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:53.795 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:53.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:53.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:16:53.796 00:16:53.796 --- 10.0.0.2 ping statistics --- 00:16:53.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.796 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:53.796 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:53.796 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:53.796 00:16:53.796 --- 10.0.0.3 ping statistics --- 00:16:53.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.796 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:53.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:16:53.796 00:16:53.796 --- 10.0.0.1 ping statistics --- 00:16:53.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.796 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87236 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87236 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 87236 ']' 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.796 16:02:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:54.054 [2024-07-15 16:02:47.544244] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:54.054 [2024-07-15 16:02:47.544354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.054 [2024-07-15 16:02:47.689598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.311 [2024-07-15 16:02:47.822070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.311 [2024-07-15 16:02:47.822123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.311 [2024-07-15 16:02:47.822137] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.311 [2024-07-15 16:02:47.822148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.311 [2024-07-15 16:02:47.822157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.311 [2024-07-15 16:02:47.823185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.311 [2024-07-15 16:02:47.823264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.311 [2024-07-15 16:02:47.823338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.311 [2024-07-15 16:02:47.823344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.877 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.877 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:16:54.877 16:02:48 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:54.877 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.877 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:54.877 [2024-07-15 16:02:48.555342] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.877 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.877 16:02:48 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:54.877 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:54.877 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:55.135 Malloc0 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:55.135 [2024-07-15 16:02:48.660690] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:55.135 [ 00:16:55.135 { 00:16:55.135 "allow_any_host": true, 00:16:55.135 "hosts": [], 00:16:55.135 "listen_addresses": [ 00:16:55.135 { 00:16:55.135 "adrfam": "IPv4", 00:16:55.135 "traddr": "10.0.0.2", 00:16:55.135 "trsvcid": "4420", 00:16:55.135 "trtype": "TCP" 00:16:55.135 } 00:16:55.135 ], 00:16:55.135 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:55.135 "subtype": "Discovery" 00:16:55.135 }, 00:16:55.135 { 00:16:55.135 "allow_any_host": true, 00:16:55.135 "hosts": [], 00:16:55.135 "listen_addresses": [ 00:16:55.135 { 00:16:55.135 "adrfam": "IPv4", 00:16:55.135 "traddr": "10.0.0.2", 00:16:55.135 "trsvcid": "4420", 00:16:55.135 "trtype": "TCP" 00:16:55.135 } 00:16:55.135 ], 00:16:55.135 "max_cntlid": 65519, 00:16:55.135 "max_namespaces": 32, 00:16:55.135 "min_cntlid": 1, 00:16:55.135 "model_number": "SPDK bdev Controller", 00:16:55.135 "namespaces": [ 00:16:55.135 { 00:16:55.135 "bdev_name": "Malloc0", 00:16:55.135 "eui64": "ABCDEF0123456789", 00:16:55.135 "name": "Malloc0", 00:16:55.135 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:55.135 "nsid": 1, 00:16:55.135 "uuid": "38122000-adc7-461c-bfa0-39a97a67270c" 00:16:55.135 } 00:16:55.135 ], 00:16:55.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.135 "serial_number": "SPDK00000000000001", 00:16:55.135 "subtype": "NVMe" 00:16:55.135 } 00:16:55.135 ] 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.135 16:02:48 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 
-L all 00:16:55.135 [2024-07-15 16:02:48.715480] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:55.136 [2024-07-15 16:02:48.715547] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87289 ] 00:16:55.136 [2024-07-15 16:02:48.857119] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:55.136 [2024-07-15 16:02:48.857196] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:55.136 [2024-07-15 16:02:48.857204] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:55.136 [2024-07-15 16:02:48.857217] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:55.136 [2024-07-15 16:02:48.857225] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:55.136 [2024-07-15 16:02:48.857366] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:55.136 [2024-07-15 16:02:48.857422] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1bbea60 0 00:16:55.136 [2024-07-15 16:02:48.861980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:55.136 [2024-07-15 16:02:48.862006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:55.136 [2024-07-15 16:02:48.862013] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:55.136 [2024-07-15 16:02:48.862017] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:55.136 [2024-07-15 16:02:48.862067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.136 [2024-07-15 16:02:48.862075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.136 [2024-07-15 16:02:48.862080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbea60) 00:16:55.136 [2024-07-15 16:02:48.862096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:55.136 [2024-07-15 16:02:48.862127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01840, cid 0, qid 0 00:16:55.399 [2024-07-15 16:02:48.869973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.399 [2024-07-15 16:02:48.869997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.399 [2024-07-15 16:02:48.870003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.399 [2024-07-15 16:02:48.870008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01840) on tqpair=0x1bbea60 00:16:55.399 [2024-07-15 16:02:48.870024] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:55.400 [2024-07-15 16:02:48.870033] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:55.400 [2024-07-15 16:02:48.870040] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:55.400 [2024-07-15 16:02:48.870059] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:16:55.400 [2024-07-15 16:02:48.870065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbea60) 00:16:55.400 [2024-07-15 16:02:48.870080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.400 [2024-07-15 16:02:48.870109] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01840, cid 0, qid 0 00:16:55.400 [2024-07-15 16:02:48.870190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.400 [2024-07-15 16:02:48.870197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.400 [2024-07-15 16:02:48.870202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01840) on tqpair=0x1bbea60 00:16:55.400 [2024-07-15 16:02:48.870213] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:55.400 [2024-07-15 16:02:48.870222] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:55.400 [2024-07-15 16:02:48.870231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbea60) 00:16:55.400 [2024-07-15 16:02:48.870248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.400 [2024-07-15 16:02:48.870268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01840, cid 0, qid 0 00:16:55.400 [2024-07-15 16:02:48.870324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.400 [2024-07-15 16:02:48.870331] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.400 [2024-07-15 16:02:48.870335] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01840) on tqpair=0x1bbea60 00:16:55.400 [2024-07-15 16:02:48.870347] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:55.400 [2024-07-15 16:02:48.870356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:55.400 [2024-07-15 16:02:48.870364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbea60) 00:16:55.400 [2024-07-15 16:02:48.870390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.400 [2024-07-15 16:02:48.870409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01840, cid 0, qid 0 00:16:55.400 [2024-07-15 16:02:48.870465] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.400 [2024-07-15 16:02:48.870480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.400 [2024-07-15 16:02:48.870485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01840) on tqpair=0x1bbea60 00:16:55.400 [2024-07-15 16:02:48.870497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:55.400 [2024-07-15 16:02:48.870508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbea60) 00:16:55.400 [2024-07-15 16:02:48.870526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.400 [2024-07-15 16:02:48.870545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01840, cid 0, qid 0 00:16:55.400 [2024-07-15 16:02:48.870597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.400 [2024-07-15 16:02:48.870604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.400 [2024-07-15 16:02:48.870608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01840) on tqpair=0x1bbea60 00:16:55.400 [2024-07-15 16:02:48.870619] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:55.400 [2024-07-15 16:02:48.870625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:55.400 [2024-07-15 16:02:48.870634] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:55.400 [2024-07-15 16:02:48.870741] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:55.400 [2024-07-15 16:02:48.870754] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:55.400 [2024-07-15 16:02:48.870764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbea60) 00:16:55.400 [2024-07-15 16:02:48.870793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.400 [2024-07-15 16:02:48.870813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01840, cid 0, qid 0 00:16:55.400 [2024-07-15 16:02:48.870870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.400 [2024-07-15 16:02:48.870877] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.400 
[2024-07-15 16:02:48.870881] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01840) on tqpair=0x1bbea60 00:16:55.400 [2024-07-15 16:02:48.870892] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:55.400 [2024-07-15 16:02:48.870903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.870912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbea60) 00:16:55.400 [2024-07-15 16:02:48.870920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.400 [2024-07-15 16:02:48.870939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01840, cid 0, qid 0 00:16:55.400 [2024-07-15 16:02:48.871010] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.400 [2024-07-15 16:02:48.871026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.400 [2024-07-15 16:02:48.871031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.400 [2024-07-15 16:02:48.871035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01840) on tqpair=0x1bbea60 00:16:55.400 [2024-07-15 16:02:48.871041] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:55.400 [2024-07-15 16:02:48.871047] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:55.400 [2024-07-15 16:02:48.871056] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:55.401 [2024-07-15 16:02:48.871067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:55.401 [2024-07-15 16:02:48.871091] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbea60) 00:16:55.401 [2024-07-15 16:02:48.871105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.401 [2024-07-15 16:02:48.871127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01840, cid 0, qid 0 00:16:55.401 [2024-07-15 16:02:48.871224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.401 [2024-07-15 16:02:48.871232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.401 [2024-07-15 16:02:48.871237] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871242] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbea60): datao=0, datal=4096, cccid=0 00:16:55.401 [2024-07-15 16:02:48.871247] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01840) on tqpair(0x1bbea60): expected_datao=0, payload_size=4096 00:16:55.401 
[2024-07-15 16:02:48.871253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871262] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871267] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.401 [2024-07-15 16:02:48.871283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.401 [2024-07-15 16:02:48.871287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01840) on tqpair=0x1bbea60 00:16:55.401 [2024-07-15 16:02:48.871301] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:55.401 [2024-07-15 16:02:48.871307] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:55.401 [2024-07-15 16:02:48.871312] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:55.401 [2024-07-15 16:02:48.871318] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:55.401 [2024-07-15 16:02:48.871323] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:55.401 [2024-07-15 16:02:48.871329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:55.401 [2024-07-15 16:02:48.871338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:55.401 [2024-07-15 16:02:48.871346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871351] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbea60) 00:16:55.401 [2024-07-15 16:02:48.871364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:55.401 [2024-07-15 16:02:48.871385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01840, cid 0, qid 0 00:16:55.401 [2024-07-15 16:02:48.871450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.401 [2024-07-15 16:02:48.871458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.401 [2024-07-15 16:02:48.871462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871466] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01840) on tqpair=0x1bbea60 00:16:55.401 [2024-07-15 16:02:48.871475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbea60) 00:16:55.401 [2024-07-15 16:02:48.871492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.401 [2024-07-15 16:02:48.871499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1bbea60) 00:16:55.401 [2024-07-15 16:02:48.871514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.401 [2024-07-15 16:02:48.871521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1bbea60) 00:16:55.401 [2024-07-15 16:02:48.871536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.401 [2024-07-15 16:02:48.871543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.401 [2024-07-15 16:02:48.871557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.401 [2024-07-15 16:02:48.871563] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:55.401 [2024-07-15 16:02:48.871577] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:55.401 [2024-07-15 16:02:48.871585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbea60) 00:16:55.401 [2024-07-15 16:02:48.871597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.401 [2024-07-15 16:02:48.871618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01840, cid 0, qid 0 00:16:55.401 [2024-07-15 16:02:48.871626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c019c0, cid 1, qid 0 00:16:55.401 [2024-07-15 16:02:48.871631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01b40, cid 2, qid 0 00:16:55.401 [2024-07-15 16:02:48.871637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.401 [2024-07-15 16:02:48.871642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01e40, cid 4, qid 0 00:16:55.401 [2024-07-15 16:02:48.871737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.401 [2024-07-15 16:02:48.871744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.401 [2024-07-15 16:02:48.871748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01e40) on 
tqpair=0x1bbea60 00:16:55.401 [2024-07-15 16:02:48.871759] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:55.401 [2024-07-15 16:02:48.871768] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:55.401 [2024-07-15 16:02:48.871781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbea60) 00:16:55.401 [2024-07-15 16:02:48.871795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.401 [2024-07-15 16:02:48.871814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01e40, cid 4, qid 0 00:16:55.401 [2024-07-15 16:02:48.871879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.401 [2024-07-15 16:02:48.871886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.401 [2024-07-15 16:02:48.871891] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871895] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbea60): datao=0, datal=4096, cccid=4 00:16:55.401 [2024-07-15 16:02:48.871900] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01e40) on tqpair(0x1bbea60): expected_datao=0, payload_size=4096 00:16:55.401 [2024-07-15 16:02:48.871905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871913] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871918] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.401 [2024-07-15 16:02:48.871933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.401 [2024-07-15 16:02:48.871937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.871941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01e40) on tqpair=0x1bbea60 00:16:55.401 [2024-07-15 16:02:48.871968] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:55.401 [2024-07-15 16:02:48.872002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.872013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbea60) 00:16:55.401 [2024-07-15 16:02:48.872022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.401 [2024-07-15 16:02:48.872030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.872035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.872039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bbea60) 00:16:55.401 [2024-07-15 16:02:48.872046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.401 [2024-07-15 16:02:48.872074] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01e40, cid 4, qid 0 00:16:55.401 [2024-07-15 16:02:48.872082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01fc0, cid 5, qid 0 00:16:55.401 [2024-07-15 16:02:48.872192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.401 [2024-07-15 16:02:48.872203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.401 [2024-07-15 16:02:48.872208] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.401 [2024-07-15 16:02:48.872213] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbea60): datao=0, datal=1024, cccid=4 00:16:55.401 [2024-07-15 16:02:48.872218] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01e40) on tqpair(0x1bbea60): expected_datao=0, payload_size=1024 00:16:55.401 [2024-07-15 16:02:48.872223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.872231] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.872236] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.872242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.402 [2024-07-15 16:02:48.872248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.402 [2024-07-15 16:02:48.872252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.872257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01fc0) on tqpair=0x1bbea60 00:16:55.402 [2024-07-15 16:02:48.913024] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.402 [2024-07-15 16:02:48.913051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.402 [2024-07-15 16:02:48.913057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.913063] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01e40) on tqpair=0x1bbea60 00:16:55.402 [2024-07-15 16:02:48.913080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.913087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbea60) 00:16:55.402 [2024-07-15 16:02:48.913097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.402 [2024-07-15 16:02:48.913131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01e40, cid 4, qid 0 00:16:55.402 [2024-07-15 16:02:48.913232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.402 [2024-07-15 16:02:48.913240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.402 [2024-07-15 16:02:48.913245] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.913249] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbea60): datao=0, datal=3072, cccid=4 00:16:55.402 [2024-07-15 16:02:48.913254] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01e40) on tqpair(0x1bbea60): expected_datao=0, payload_size=3072 00:16:55.402 [2024-07-15 16:02:48.913260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.913268] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:16:55.402 [2024-07-15 16:02:48.913273] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.913282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.402 [2024-07-15 16:02:48.913288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.402 [2024-07-15 16:02:48.913292] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.913297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01e40) on tqpair=0x1bbea60 00:16:55.402 [2024-07-15 16:02:48.913309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.913314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbea60) 00:16:55.402 [2024-07-15 16:02:48.913322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.402 [2024-07-15 16:02:48.913350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01e40, cid 4, qid 0 00:16:55.402 [2024-07-15 16:02:48.913424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.402 [2024-07-15 16:02:48.913431] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.402 [2024-07-15 16:02:48.913436] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.913440] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbea60): datao=0, datal=8, cccid=4 00:16:55.402 [2024-07-15 16:02:48.913445] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01e40) on tqpair(0x1bbea60): expected_datao=0, payload_size=8 00:16:55.402 [2024-07-15 16:02:48.913450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.913457] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.913461] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.957993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.402 [2024-07-15 16:02:48.958028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.402 [2024-07-15 16:02:48.958034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.402 [2024-07-15 16:02:48.958040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01e40) on tqpair=0x1bbea60 00:16:55.402 ===================================================== 00:16:55.402 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:55.402 ===================================================== 00:16:55.402 Controller Capabilities/Features 00:16:55.402 ================================ 00:16:55.402 Vendor ID: 0000 00:16:55.402 Subsystem Vendor ID: 0000 00:16:55.402 Serial Number: .................... 00:16:55.402 Model Number: ........................................ 
00:16:55.402 Firmware Version: 24.09 00:16:55.402 Recommended Arb Burst: 0 00:16:55.402 IEEE OUI Identifier: 00 00 00 00:16:55.402 Multi-path I/O 00:16:55.402 May have multiple subsystem ports: No 00:16:55.402 May have multiple controllers: No 00:16:55.402 Associated with SR-IOV VF: No 00:16:55.402 Max Data Transfer Size: 131072 00:16:55.402 Max Number of Namespaces: 0 00:16:55.402 Max Number of I/O Queues: 1024 00:16:55.402 NVMe Specification Version (VS): 1.3 00:16:55.402 NVMe Specification Version (Identify): 1.3 00:16:55.402 Maximum Queue Entries: 128 00:16:55.402 Contiguous Queues Required: Yes 00:16:55.402 Arbitration Mechanisms Supported 00:16:55.402 Weighted Round Robin: Not Supported 00:16:55.402 Vendor Specific: Not Supported 00:16:55.402 Reset Timeout: 15000 ms 00:16:55.402 Doorbell Stride: 4 bytes 00:16:55.402 NVM Subsystem Reset: Not Supported 00:16:55.402 Command Sets Supported 00:16:55.402 NVM Command Set: Supported 00:16:55.402 Boot Partition: Not Supported 00:16:55.402 Memory Page Size Minimum: 4096 bytes 00:16:55.402 Memory Page Size Maximum: 4096 bytes 00:16:55.402 Persistent Memory Region: Not Supported 00:16:55.402 Optional Asynchronous Events Supported 00:16:55.402 Namespace Attribute Notices: Not Supported 00:16:55.402 Firmware Activation Notices: Not Supported 00:16:55.402 ANA Change Notices: Not Supported 00:16:55.402 PLE Aggregate Log Change Notices: Not Supported 00:16:55.402 LBA Status Info Alert Notices: Not Supported 00:16:55.402 EGE Aggregate Log Change Notices: Not Supported 00:16:55.402 Normal NVM Subsystem Shutdown event: Not Supported 00:16:55.402 Zone Descriptor Change Notices: Not Supported 00:16:55.402 Discovery Log Change Notices: Supported 00:16:55.402 Controller Attributes 00:16:55.402 128-bit Host Identifier: Not Supported 00:16:55.402 Non-Operational Permissive Mode: Not Supported 00:16:55.402 NVM Sets: Not Supported 00:16:55.402 Read Recovery Levels: Not Supported 00:16:55.402 Endurance Groups: Not Supported 00:16:55.402 Predictable Latency Mode: Not Supported 00:16:55.402 Traffic Based Keep ALive: Not Supported 00:16:55.402 Namespace Granularity: Not Supported 00:16:55.402 SQ Associations: Not Supported 00:16:55.402 UUID List: Not Supported 00:16:55.402 Multi-Domain Subsystem: Not Supported 00:16:55.402 Fixed Capacity Management: Not Supported 00:16:55.402 Variable Capacity Management: Not Supported 00:16:55.402 Delete Endurance Group: Not Supported 00:16:55.402 Delete NVM Set: Not Supported 00:16:55.402 Extended LBA Formats Supported: Not Supported 00:16:55.402 Flexible Data Placement Supported: Not Supported 00:16:55.402 00:16:55.402 Controller Memory Buffer Support 00:16:55.402 ================================ 00:16:55.402 Supported: No 00:16:55.402 00:16:55.402 Persistent Memory Region Support 00:16:55.402 ================================ 00:16:55.402 Supported: No 00:16:55.402 00:16:55.402 Admin Command Set Attributes 00:16:55.402 ============================ 00:16:55.402 Security Send/Receive: Not Supported 00:16:55.402 Format NVM: Not Supported 00:16:55.402 Firmware Activate/Download: Not Supported 00:16:55.402 Namespace Management: Not Supported 00:16:55.402 Device Self-Test: Not Supported 00:16:55.402 Directives: Not Supported 00:16:55.402 NVMe-MI: Not Supported 00:16:55.402 Virtualization Management: Not Supported 00:16:55.402 Doorbell Buffer Config: Not Supported 00:16:55.402 Get LBA Status Capability: Not Supported 00:16:55.402 Command & Feature Lockdown Capability: Not Supported 00:16:55.402 Abort Command Limit: 1 00:16:55.402 Async 
Event Request Limit: 4 00:16:55.402 Number of Firmware Slots: N/A 00:16:55.402 Firmware Slot 1 Read-Only: N/A 00:16:55.402 Firmware Activation Without Reset: N/A 00:16:55.402 Multiple Update Detection Support: N/A 00:16:55.402 Firmware Update Granularity: No Information Provided 00:16:55.402 Per-Namespace SMART Log: No 00:16:55.402 Asymmetric Namespace Access Log Page: Not Supported 00:16:55.402 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:55.402 Command Effects Log Page: Not Supported 00:16:55.402 Get Log Page Extended Data: Supported 00:16:55.402 Telemetry Log Pages: Not Supported 00:16:55.402 Persistent Event Log Pages: Not Supported 00:16:55.402 Supported Log Pages Log Page: May Support 00:16:55.402 Commands Supported & Effects Log Page: Not Supported 00:16:55.402 Feature Identifiers & Effects Log Page:May Support 00:16:55.402 NVMe-MI Commands & Effects Log Page: May Support 00:16:55.402 Data Area 4 for Telemetry Log: Not Supported 00:16:55.402 Error Log Page Entries Supported: 128 00:16:55.402 Keep Alive: Not Supported 00:16:55.402 00:16:55.402 NVM Command Set Attributes 00:16:55.402 ========================== 00:16:55.402 Submission Queue Entry Size 00:16:55.402 Max: 1 00:16:55.402 Min: 1 00:16:55.403 Completion Queue Entry Size 00:16:55.403 Max: 1 00:16:55.403 Min: 1 00:16:55.403 Number of Namespaces: 0 00:16:55.403 Compare Command: Not Supported 00:16:55.403 Write Uncorrectable Command: Not Supported 00:16:55.403 Dataset Management Command: Not Supported 00:16:55.403 Write Zeroes Command: Not Supported 00:16:55.403 Set Features Save Field: Not Supported 00:16:55.403 Reservations: Not Supported 00:16:55.403 Timestamp: Not Supported 00:16:55.403 Copy: Not Supported 00:16:55.403 Volatile Write Cache: Not Present 00:16:55.403 Atomic Write Unit (Normal): 1 00:16:55.403 Atomic Write Unit (PFail): 1 00:16:55.403 Atomic Compare & Write Unit: 1 00:16:55.403 Fused Compare & Write: Supported 00:16:55.403 Scatter-Gather List 00:16:55.403 SGL Command Set: Supported 00:16:55.403 SGL Keyed: Supported 00:16:55.403 SGL Bit Bucket Descriptor: Not Supported 00:16:55.403 SGL Metadata Pointer: Not Supported 00:16:55.403 Oversized SGL: Not Supported 00:16:55.403 SGL Metadata Address: Not Supported 00:16:55.403 SGL Offset: Supported 00:16:55.403 Transport SGL Data Block: Not Supported 00:16:55.403 Replay Protected Memory Block: Not Supported 00:16:55.403 00:16:55.403 Firmware Slot Information 00:16:55.403 ========================= 00:16:55.403 Active slot: 0 00:16:55.403 00:16:55.403 00:16:55.403 Error Log 00:16:55.403 ========= 00:16:55.403 00:16:55.403 Active Namespaces 00:16:55.403 ================= 00:16:55.403 Discovery Log Page 00:16:55.403 ================== 00:16:55.403 Generation Counter: 2 00:16:55.403 Number of Records: 2 00:16:55.403 Record Format: 0 00:16:55.403 00:16:55.403 Discovery Log Entry 0 00:16:55.403 ---------------------- 00:16:55.403 Transport Type: 3 (TCP) 00:16:55.403 Address Family: 1 (IPv4) 00:16:55.403 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:55.403 Entry Flags: 00:16:55.403 Duplicate Returned Information: 1 00:16:55.403 Explicit Persistent Connection Support for Discovery: 1 00:16:55.403 Transport Requirements: 00:16:55.403 Secure Channel: Not Required 00:16:55.403 Port ID: 0 (0x0000) 00:16:55.403 Controller ID: 65535 (0xffff) 00:16:55.403 Admin Max SQ Size: 128 00:16:55.403 Transport Service Identifier: 4420 00:16:55.403 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:55.403 Transport Address: 10.0.0.2 00:16:55.403 
Discovery Log Entry 1 00:16:55.403 ---------------------- 00:16:55.403 Transport Type: 3 (TCP) 00:16:55.403 Address Family: 1 (IPv4) 00:16:55.403 Subsystem Type: 2 (NVM Subsystem) 00:16:55.403 Entry Flags: 00:16:55.403 Duplicate Returned Information: 0 00:16:55.403 Explicit Persistent Connection Support for Discovery: 0 00:16:55.403 Transport Requirements: 00:16:55.403 Secure Channel: Not Required 00:16:55.403 Port ID: 0 (0x0000) 00:16:55.403 Controller ID: 65535 (0xffff) 00:16:55.403 Admin Max SQ Size: 128 00:16:55.403 Transport Service Identifier: 4420 00:16:55.403 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:55.403 Transport Address: 10.0.0.2 [2024-07-15 16:02:48.958183] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:55.403 [2024-07-15 16:02:48.958200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01840) on tqpair=0x1bbea60 00:16:55.403 [2024-07-15 16:02:48.958209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.403 [2024-07-15 16:02:48.958216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c019c0) on tqpair=0x1bbea60 00:16:55.403 [2024-07-15 16:02:48.958222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.403 [2024-07-15 16:02:48.958228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01b40) on tqpair=0x1bbea60 00:16:55.403 [2024-07-15 16:02:48.958233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.403 [2024-07-15 16:02:48.958239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.403 [2024-07-15 16:02:48.958244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.403 [2024-07-15 16:02:48.958257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.403 [2024-07-15 16:02:48.958278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.403 [2024-07-15 16:02:48.958306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.403 [2024-07-15 16:02:48.958379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.403 [2024-07-15 16:02:48.958387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.403 [2024-07-15 16:02:48.958391] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.403 [2024-07-15 16:02:48.958405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.403 [2024-07-15 
16:02:48.958422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.403 [2024-07-15 16:02:48.958447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.403 [2024-07-15 16:02:48.958527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.403 [2024-07-15 16:02:48.958534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.403 [2024-07-15 16:02:48.958538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.403 [2024-07-15 16:02:48.958548] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:55.403 [2024-07-15 16:02:48.958554] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:55.403 [2024-07-15 16:02:48.958565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.403 [2024-07-15 16:02:48.958582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.403 [2024-07-15 16:02:48.958601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.403 [2024-07-15 16:02:48.958656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.403 [2024-07-15 16:02:48.958663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.403 [2024-07-15 16:02:48.958667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.403 [2024-07-15 16:02:48.958683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958689] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.403 [2024-07-15 16:02:48.958701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.403 [2024-07-15 16:02:48.958719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.403 [2024-07-15 16:02:48.958773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.403 [2024-07-15 16:02:48.958780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.403 [2024-07-15 16:02:48.958784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.403 [2024-07-15 16:02:48.958799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958804] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958809] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.403 [2024-07-15 16:02:48.958817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.403 [2024-07-15 16:02:48.958835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.403 [2024-07-15 16:02:48.958890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.403 [2024-07-15 16:02:48.958897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.403 [2024-07-15 16:02:48.958901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.403 [2024-07-15 16:02:48.958917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.958926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.403 [2024-07-15 16:02:48.958934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.403 [2024-07-15 16:02:48.958952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.403 [2024-07-15 16:02:48.959020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.403 [2024-07-15 16:02:48.959029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.403 [2024-07-15 16:02:48.959034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.959038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.403 [2024-07-15 16:02:48.959050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.959055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.403 [2024-07-15 16:02:48.959059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.403 [2024-07-15 16:02:48.959067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.403 [2024-07-15 16:02:48.959088] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.404 [2024-07-15 16:02:48.959142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.404 [2024-07-15 16:02:48.959149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.404 [2024-07-15 16:02:48.959153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.404 [2024-07-15 16:02:48.959168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.404 [2024-07-15 16:02:48.959186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.404 [2024-07-15 16:02:48.959204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.404 [2024-07-15 16:02:48.959270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.404 [2024-07-15 16:02:48.959278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.404 [2024-07-15 16:02:48.959282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.404 [2024-07-15 16:02:48.959297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.404 [2024-07-15 16:02:48.959314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.404 [2024-07-15 16:02:48.959332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.404 [2024-07-15 16:02:48.959386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.404 [2024-07-15 16:02:48.959392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.404 [2024-07-15 16:02:48.959396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.404 [2024-07-15 16:02:48.959412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.404 [2024-07-15 16:02:48.959430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.404 [2024-07-15 16:02:48.959449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.404 [2024-07-15 16:02:48.959500] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.404 [2024-07-15 16:02:48.959507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.404 [2024-07-15 16:02:48.959511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.404 [2024-07-15 16:02:48.959527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.404 [2024-07-15 16:02:48.959545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.404 [2024-07-15 16:02:48.959563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.404 
[2024-07-15 16:02:48.959620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.404 [2024-07-15 16:02:48.959627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.404 [2024-07-15 16:02:48.959631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.404 [2024-07-15 16:02:48.959646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.404 [2024-07-15 16:02:48.959664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.404 [2024-07-15 16:02:48.959682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.404 [2024-07-15 16:02:48.959731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.404 [2024-07-15 16:02:48.959740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.404 [2024-07-15 16:02:48.959744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.404 [2024-07-15 16:02:48.959760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.404 [2024-07-15 16:02:48.959777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.404 [2024-07-15 16:02:48.959795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.404 [2024-07-15 16:02:48.959853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.404 [2024-07-15 16:02:48.959871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.404 [2024-07-15 16:02:48.959876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.404 [2024-07-15 16:02:48.959892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.959902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.404 [2024-07-15 16:02:48.959910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.404 [2024-07-15 16:02:48.959930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.404 [2024-07-15 16:02:48.959995] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.404 [2024-07-15 16:02:48.960004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:16:55.404 [2024-07-15 16:02:48.960008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.960013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.404 [2024-07-15 16:02:48.960024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.960030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.960034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.404 [2024-07-15 16:02:48.960042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.404 [2024-07-15 16:02:48.960063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.404 [2024-07-15 16:02:48.960124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.404 [2024-07-15 16:02:48.960131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.404 [2024-07-15 16:02:48.960135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.960139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.404 [2024-07-15 16:02:48.960150] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.960156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.960160] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.404 [2024-07-15 16:02:48.960168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.404 [2024-07-15 16:02:48.960186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.404 [2024-07-15 16:02:48.960239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.404 [2024-07-15 16:02:48.960246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.404 [2024-07-15 16:02:48.960250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.960255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.404 [2024-07-15 16:02:48.960266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.960271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.404 [2024-07-15 16:02:48.960276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.404 [2024-07-15 16:02:48.960283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.404 [2024-07-15 16:02:48.960301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.404 [2024-07-15 16:02:48.960363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.404 [2024-07-15 16:02:48.960378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.404 [2024-07-15 16:02:48.960383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.960400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.960417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.960437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.960494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.960501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.960505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.960521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.960538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.960556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.960607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.960614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.960618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.960634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.960651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.960668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.960725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.960732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.960736] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.960752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960757] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.960769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.960787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.960838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.960845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.960849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.960865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.960883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.960900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.960954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.960975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.960980] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.960985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.960997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.961015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.961036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.961091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.961099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.961103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.961118] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 
[2024-07-15 16:02:48.961136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.961154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.961206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.961213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.961217] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.961233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961242] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.961250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.961267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.961322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.961333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.961338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.961354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.961372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.961391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.961444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.961451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.961455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.961471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.961488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.961506] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.961557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.961564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.961568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.961583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.961600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.961618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.961670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.961682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.961686] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.961702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.961720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.961739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.405 [2024-07-15 16:02:48.961791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.405 [2024-07-15 16:02:48.961802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.405 [2024-07-15 16:02:48.961807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.405 [2024-07-15 16:02:48.961823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.405 [2024-07-15 16:02:48.961832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.405 [2024-07-15 16:02:48.961840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.405 [2024-07-15 16:02:48.961870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.961927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 
[2024-07-15 16:02:48.961938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.961943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.961947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.961970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.961977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.961981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.961989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.962011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.962066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.962073] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.962087] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.962103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.962120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.962139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.962197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.962204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.962208] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.962223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.962240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.962258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.962310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.962318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.962322] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:16:55.406 [2024-07-15 16:02:48.962326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.962337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.962354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.962372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.962424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.962435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.962440] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962445] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.962457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.962475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.962494] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.962548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.962559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.962564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.962580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.962598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.962617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.962674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.962681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.962685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.962701] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.962718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.962736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.962791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.962798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.962802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.962817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.962834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.962852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.962906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.962913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.962917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.962933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.962942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.962950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.962981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.963037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.963045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.963049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.963053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.963065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.963070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.963074] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.963082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.963101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.963152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.963159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.963163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.963167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.963178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.963183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.963188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.963195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.963213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.963267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.963274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.963278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.963283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.963293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.963299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.963303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.406 [2024-07-15 16:02:48.963310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.406 [2024-07-15 16:02:48.963328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.406 [2024-07-15 16:02:48.963383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.406 [2024-07-15 16:02:48.963390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.406 [2024-07-15 16:02:48.963394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.406 [2024-07-15 16:02:48.963399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.406 [2024-07-15 16:02:48.963410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963415] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963419] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.963427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.963444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 16:02:48.963498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.963510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 [2024-07-15 16:02:48.963514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.963530] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.963548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.963566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 16:02:48.963618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.963625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 [2024-07-15 16:02:48.963629] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.963645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.963662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.963680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 16:02:48.963731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.963751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 [2024-07-15 16:02:48.963757] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.963773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.963791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.963810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 
16:02:48.963862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.963873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 [2024-07-15 16:02:48.963878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.963894] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.963904] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.963912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.963931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 16:02:48.963995] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.964010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 [2024-07-15 16:02:48.964015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.964031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.964049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.964069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 16:02:48.964121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.964129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 [2024-07-15 16:02:48.964133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.964148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.964166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.964184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 16:02:48.964238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.964245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 
[2024-07-15 16:02:48.964249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.964265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.964282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.964299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 16:02:48.964351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.964358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 [2024-07-15 16:02:48.964362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964367] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.964377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.964395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.964413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 16:02:48.964468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.964480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 [2024-07-15 16:02:48.964484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.964500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.964518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.964537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 16:02:48.964591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.964598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 [2024-07-15 16:02:48.964602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.964617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.964635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.964652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 16:02:48.964704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.964711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 [2024-07-15 16:02:48.964715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.964730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.964747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.964765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.407 [2024-07-15 16:02:48.964821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.407 [2024-07-15 16:02:48.964828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.407 [2024-07-15 16:02:48.964832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.407 [2024-07-15 16:02:48.964847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.407 [2024-07-15 16:02:48.964857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.407 [2024-07-15 16:02:48.964865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.407 [2024-07-15 16:02:48.964883] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.408 [2024-07-15 16:02:48.964933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.408 [2024-07-15 16:02:48.964941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.408 [2024-07-15 16:02:48.964945] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.964949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.408 [2024-07-15 16:02:48.964982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.964988] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.964992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.408 [2024-07-15 16:02:48.965000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.408 [2024-07-15 16:02:48.965020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.408 [2024-07-15 16:02:48.965073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.408 [2024-07-15 16:02:48.965080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.408 [2024-07-15 16:02:48.965085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.408 [2024-07-15 16:02:48.965100] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.408 [2024-07-15 16:02:48.965118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.408 [2024-07-15 16:02:48.965136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.408 [2024-07-15 16:02:48.965192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.408 [2024-07-15 16:02:48.965203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.408 [2024-07-15 16:02:48.965208] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.408 [2024-07-15 16:02:48.965224] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965234] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.408 [2024-07-15 16:02:48.965241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.408 [2024-07-15 16:02:48.965260] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.408 [2024-07-15 16:02:48.965316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.408 [2024-07-15 16:02:48.965327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.408 [2024-07-15 16:02:48.965332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.408 [2024-07-15 16:02:48.965348] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.408 
[2024-07-15 16:02:48.965365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.408 [2024-07-15 16:02:48.965384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.408 [2024-07-15 16:02:48.965435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.408 [2024-07-15 16:02:48.965443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.408 [2024-07-15 16:02:48.965447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.408 [2024-07-15 16:02:48.965463] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.408 [2024-07-15 16:02:48.965480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.408 [2024-07-15 16:02:48.965498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.408 [2024-07-15 16:02:48.965550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.408 [2024-07-15 16:02:48.965557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.408 [2024-07-15 16:02:48.965561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.408 [2024-07-15 16:02:48.965577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.408 [2024-07-15 16:02:48.965594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.408 [2024-07-15 16:02:48.965612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.408 [2024-07-15 16:02:48.965666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.408 [2024-07-15 16:02:48.965676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.408 [2024-07-15 16:02:48.965681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.408 [2024-07-15 16:02:48.965697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.408 [2024-07-15 16:02:48.965715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.408 [2024-07-15 16:02:48.965733] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.408 [2024-07-15 16:02:48.965788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.408 [2024-07-15 16:02:48.965799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.408 [2024-07-15 16:02:48.965804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.408 [2024-07-15 16:02:48.965820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.408 [2024-07-15 16:02:48.965837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.408 [2024-07-15 16:02:48.965866] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.408 [2024-07-15 16:02:48.965921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.408 [2024-07-15 16:02:48.965929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.408 [2024-07-15 16:02:48.965933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.408 [2024-07-15 16:02:48.965949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.965954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.969979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbea60) 00:16:55.408 [2024-07-15 16:02:48.969992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.408 [2024-07-15 16:02:48.970020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01cc0, cid 3, qid 0 00:16:55.408 [2024-07-15 16:02:48.970100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.408 [2024-07-15 16:02:48.970108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.408 [2024-07-15 16:02:48.970112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.408 [2024-07-15 16:02:48.970117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01cc0) on tqpair=0x1bbea60 00:16:55.408 [2024-07-15 16:02:48.970127] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 11 milliseconds 00:16:55.408 00:16:55.408 16:02:48 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:55.408 [2024-07-15 16:02:49.011871] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
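(The spdk_nvme_identify run above is what drives the admin-queue traffic in the surrounding DEBUG records: the tool parses the -r transport ID string, connects to nqn.2016-06.io.spdk:cnode1 over TCP, and the connect path issues the FABRIC CONNECT, PROPERTY GET/SET and IDENTIFY commands that the log then traces. The C sketch below is illustrative only, written against SPDK's public API as an assumption about what an equivalent minimal program looks like, not taken from the test's sources; the program name and the printed fields are the editor's choices.)

/* identify_sketch.c - minimal sketch of the flow spdk_nvme_identify performs:
 * parse the transport ID, connect (runs the controller init state machine
 * seen in the log: connect adminq -> read vs/cap -> CC.EN=1 -> wait for
 * CSTS.RDY=1 -> identify controller), then read the cached identify data.
 * Illustrative only; error handling is minimal. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";   /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect: FABRIC CONNECT on qid 0 plus the PROPERTY GET/SET
	 * exchanges recorded by nvme_qpair.c in the log. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* The IDENTIFY CONTROLLER (opcode 06, cdw10=1) result is cached on the
	 * controller object after connect. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("cntlid 0x%04x  subnqn %s\n", cdata->cntlid,
	       (const char *)cdata->subnqn);

	spdk_nvme_detach(ctrlr);
	return 0;
}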
00:16:55.408 [2024-07-15 16:02:49.011941] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87297 ] 00:16:55.670 [2024-07-15 16:02:49.152236] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:55.670 [2024-07-15 16:02:49.152299] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:55.670 [2024-07-15 16:02:49.152307] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:55.670 [2024-07-15 16:02:49.152318] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:55.670 [2024-07-15 16:02:49.152325] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:55.670 [2024-07-15 16:02:49.152496] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:55.670 [2024-07-15 16:02:49.152561] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x68ba60 0 00:16:55.670 [2024-07-15 16:02:49.159042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:55.670 [2024-07-15 16:02:49.159068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:55.670 [2024-07-15 16:02:49.159075] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:55.670 [2024-07-15 16:02:49.159079] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:55.670 [2024-07-15 16:02:49.159126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.159133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.159137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68ba60) 00:16:55.670 [2024-07-15 16:02:49.159150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:55.670 [2024-07-15 16:02:49.159184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce840, cid 0, qid 0 00:16:55.670 [2024-07-15 16:02:49.167018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.670 [2024-07-15 16:02:49.167041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.670 [2024-07-15 16:02:49.167046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce840) on tqpair=0x68ba60 00:16:55.670 [2024-07-15 16:02:49.167062] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:55.670 [2024-07-15 16:02:49.167070] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:55.670 [2024-07-15 16:02:49.167077] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:55.670 [2024-07-15 16:02:49.167095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167105] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68ba60) 00:16:55.670 [2024-07-15 16:02:49.167115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.670 [2024-07-15 16:02:49.167144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce840, cid 0, qid 0 00:16:55.670 [2024-07-15 16:02:49.167215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.670 [2024-07-15 16:02:49.167222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.670 [2024-07-15 16:02:49.167226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce840) on tqpair=0x68ba60 00:16:55.670 [2024-07-15 16:02:49.167236] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:55.670 [2024-07-15 16:02:49.167245] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:55.670 [2024-07-15 16:02:49.167252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167261] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68ba60) 00:16:55.670 [2024-07-15 16:02:49.167269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.670 [2024-07-15 16:02:49.167288] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce840, cid 0, qid 0 00:16:55.670 [2024-07-15 16:02:49.167351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.670 [2024-07-15 16:02:49.167358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.670 [2024-07-15 16:02:49.167362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167367] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce840) on tqpair=0x68ba60 00:16:55.670 [2024-07-15 16:02:49.167373] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:55.670 [2024-07-15 16:02:49.167382] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:55.670 [2024-07-15 16:02:49.167390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68ba60) 00:16:55.670 [2024-07-15 16:02:49.167406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.670 [2024-07-15 16:02:49.167425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce840, cid 0, qid 0 00:16:55.670 [2024-07-15 16:02:49.167527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.670 [2024-07-15 16:02:49.167534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.670 [2024-07-15 16:02:49.167538] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce840) on tqpair=0x68ba60 00:16:55.670 [2024-07-15 16:02:49.167548] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:55.670 [2024-07-15 16:02:49.167559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68ba60) 00:16:55.670 [2024-07-15 16:02:49.167575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.670 [2024-07-15 16:02:49.167593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce840, cid 0, qid 0 00:16:55.670 [2024-07-15 16:02:49.167883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.670 [2024-07-15 16:02:49.167890] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.670 [2024-07-15 16:02:49.167894] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.670 [2024-07-15 16:02:49.167898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce840) on tqpair=0x68ba60 00:16:55.670 [2024-07-15 16:02:49.167903] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:55.671 [2024-07-15 16:02:49.167908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:55.671 [2024-07-15 16:02:49.167916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:55.671 [2024-07-15 16:02:49.168023] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:55.671 [2024-07-15 16:02:49.168029] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:55.671 [2024-07-15 16:02:49.168039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.168044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.168048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68ba60) 00:16:55.671 [2024-07-15 16:02:49.168056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.671 [2024-07-15 16:02:49.168078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce840, cid 0, qid 0 00:16:55.671 [2024-07-15 16:02:49.168442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.671 [2024-07-15 16:02:49.168458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.671 [2024-07-15 16:02:49.168463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.168467] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce840) on tqpair=0x68ba60 00:16:55.671 [2024-07-15 16:02:49.168473] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:55.671 [2024-07-15 16:02:49.168484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.168489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.168493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68ba60) 00:16:55.671 [2024-07-15 16:02:49.168508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.671 [2024-07-15 16:02:49.168528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce840, cid 0, qid 0 00:16:55.671 [2024-07-15 16:02:49.168601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.671 [2024-07-15 16:02:49.168608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.671 [2024-07-15 16:02:49.168612] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.168616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce840) on tqpair=0x68ba60 00:16:55.671 [2024-07-15 16:02:49.168621] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:55.671 [2024-07-15 16:02:49.168642] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:55.671 [2024-07-15 16:02:49.168651] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:55.671 [2024-07-15 16:02:49.168662] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:55.671 [2024-07-15 16:02:49.168673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.168678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68ba60) 00:16:55.671 [2024-07-15 16:02:49.168685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.671 [2024-07-15 16:02:49.168706] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce840, cid 0, qid 0 00:16:55.671 [2024-07-15 16:02:49.169194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.671 [2024-07-15 16:02:49.169209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.671 [2024-07-15 16:02:49.169214] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169219] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68ba60): datao=0, datal=4096, cccid=0 00:16:55.671 [2024-07-15 16:02:49.169224] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6ce840) on tqpair(0x68ba60): expected_datao=0, payload_size=4096 00:16:55.671 [2024-07-15 16:02:49.169230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169238] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169242] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.671 [2024-07-15 
16:02:49.169252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.671 [2024-07-15 16:02:49.169258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.671 [2024-07-15 16:02:49.169262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169266] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce840) on tqpair=0x68ba60 00:16:55.671 [2024-07-15 16:02:49.169275] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:55.671 [2024-07-15 16:02:49.169281] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:55.671 [2024-07-15 16:02:49.169286] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:55.671 [2024-07-15 16:02:49.169291] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:55.671 [2024-07-15 16:02:49.169296] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:55.671 [2024-07-15 16:02:49.169301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:55.671 [2024-07-15 16:02:49.169311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:55.671 [2024-07-15 16:02:49.169319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68ba60) 00:16:55.671 [2024-07-15 16:02:49.169336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:55.671 [2024-07-15 16:02:49.169359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce840, cid 0, qid 0 00:16:55.671 [2024-07-15 16:02:49.169584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.671 [2024-07-15 16:02:49.169598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.671 [2024-07-15 16:02:49.169602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce840) on tqpair=0x68ba60 00:16:55.671 [2024-07-15 16:02:49.169615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68ba60) 00:16:55.671 [2024-07-15 16:02:49.169631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.671 [2024-07-15 16:02:49.169638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x68ba60) 00:16:55.671 
[2024-07-15 16:02:49.169652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.671 [2024-07-15 16:02:49.169659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x68ba60) 00:16:55.671 [2024-07-15 16:02:49.169673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.671 [2024-07-15 16:02:49.169680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.671 [2024-07-15 16:02:49.169694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.671 [2024-07-15 16:02:49.169699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:55.671 [2024-07-15 16:02:49.169714] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:55.671 [2024-07-15 16:02:49.169721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.169726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68ba60) 00:16:55.671 [2024-07-15 16:02:49.169733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.671 [2024-07-15 16:02:49.169756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce840, cid 0, qid 0 00:16:55.671 [2024-07-15 16:02:49.169763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce9c0, cid 1, qid 0 00:16:55.671 [2024-07-15 16:02:49.169768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ceb40, cid 2, qid 0 00:16:55.671 [2024-07-15 16:02:49.169774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.671 [2024-07-15 16:02:49.169779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cee40, cid 4, qid 0 00:16:55.671 [2024-07-15 16:02:49.170383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.671 [2024-07-15 16:02:49.170399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.671 [2024-07-15 16:02:49.170404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.170408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cee40) on tqpair=0x68ba60 00:16:55.671 [2024-07-15 16:02:49.170414] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:55.671 [2024-07-15 16:02:49.170424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:55.671 [2024-07-15 16:02:49.170434] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:55.671 [2024-07-15 16:02:49.170441] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:55.671 [2024-07-15 16:02:49.170449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.170453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.170458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68ba60) 00:16:55.671 [2024-07-15 16:02:49.170465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:55.671 [2024-07-15 16:02:49.170489] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cee40, cid 4, qid 0 00:16:55.671 [2024-07-15 16:02:49.170767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.671 [2024-07-15 16:02:49.170782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.671 [2024-07-15 16:02:49.170786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.671 [2024-07-15 16:02:49.170791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cee40) on tqpair=0x68ba60 00:16:55.672 [2024-07-15 16:02:49.170853] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.170865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.170873] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.170878] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68ba60) 00:16:55.672 [2024-07-15 16:02:49.170886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.672 [2024-07-15 16:02:49.170907] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cee40, cid 4, qid 0 00:16:55.672 [2024-07-15 16:02:49.175007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.672 [2024-07-15 16:02:49.175028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.672 [2024-07-15 16:02:49.175033] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175037] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68ba60): datao=0, datal=4096, cccid=4 00:16:55.672 [2024-07-15 16:02:49.175043] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cee40) on tqpair(0x68ba60): expected_datao=0, payload_size=4096 00:16:55.672 [2024-07-15 16:02:49.175048] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175055] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175060] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.672 [2024-07-15 16:02:49.175072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:16:55.672 [2024-07-15 16:02:49.175076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cee40) on tqpair=0x68ba60 00:16:55.672 [2024-07-15 16:02:49.175099] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:55.672 [2024-07-15 16:02:49.175111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.175123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.175132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68ba60) 00:16:55.672 [2024-07-15 16:02:49.175145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.672 [2024-07-15 16:02:49.175172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cee40, cid 4, qid 0 00:16:55.672 [2024-07-15 16:02:49.175568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.672 [2024-07-15 16:02:49.175583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.672 [2024-07-15 16:02:49.175588] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175592] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68ba60): datao=0, datal=4096, cccid=4 00:16:55.672 [2024-07-15 16:02:49.175597] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cee40) on tqpair(0x68ba60): expected_datao=0, payload_size=4096 00:16:55.672 [2024-07-15 16:02:49.175603] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175610] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175614] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.672 [2024-07-15 16:02:49.175629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.672 [2024-07-15 16:02:49.175633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cee40) on tqpair=0x68ba60 00:16:55.672 [2024-07-15 16:02:49.175655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.175666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.175676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.175680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68ba60) 00:16:55.672 [2024-07-15 16:02:49.175688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.672 [2024-07-15 16:02:49.175711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cee40, cid 4, qid 0 00:16:55.672 [2024-07-15 16:02:49.176141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.672 [2024-07-15 16:02:49.176156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.672 [2024-07-15 16:02:49.176161] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176165] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68ba60): datao=0, datal=4096, cccid=4 00:16:55.672 [2024-07-15 16:02:49.176170] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cee40) on tqpair(0x68ba60): expected_datao=0, payload_size=4096 00:16:55.672 [2024-07-15 16:02:49.176175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176183] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176187] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.672 [2024-07-15 16:02:49.176202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.672 [2024-07-15 16:02:49.176206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cee40) on tqpair=0x68ba60 00:16:55.672 [2024-07-15 16:02:49.176219] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.176229] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.176240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.176247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.176253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.176258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.176264] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:55.672 [2024-07-15 16:02:49.176279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:55.672 [2024-07-15 16:02:49.176285] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:55.672 [2024-07-15 16:02:49.176302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68ba60) 00:16:55.672 [2024-07-15 16:02:49.176315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.672 [2024-07-15 16:02:49.176323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x68ba60) 00:16:55.672 [2024-07-15 16:02:49.176337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.672 [2024-07-15 16:02:49.176366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cee40, cid 4, qid 0 00:16:55.672 [2024-07-15 16:02:49.176375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cefc0, cid 5, qid 0 00:16:55.672 [2024-07-15 16:02:49.176738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.672 [2024-07-15 16:02:49.176752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.672 [2024-07-15 16:02:49.176757] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cee40) on tqpair=0x68ba60 00:16:55.672 [2024-07-15 16:02:49.176785] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.672 [2024-07-15 16:02:49.176791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.672 [2024-07-15 16:02:49.176795] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cefc0) on tqpair=0x68ba60 00:16:55.672 [2024-07-15 16:02:49.176810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x68ba60) 00:16:55.672 [2024-07-15 16:02:49.176823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.672 [2024-07-15 16:02:49.176843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cefc0, cid 5, qid 0 00:16:55.672 [2024-07-15 16:02:49.176907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.672 [2024-07-15 16:02:49.176914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.672 [2024-07-15 16:02:49.176918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cefc0) on tqpair=0x68ba60 00:16:55.672 [2024-07-15 16:02:49.176933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.176937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x68ba60) 00:16:55.672 [2024-07-15 16:02:49.176945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.672 [2024-07-15 16:02:49.176987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cefc0, cid 5, qid 0 00:16:55.672 [2024-07-15 16:02:49.177508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.672 [2024-07-15 16:02:49.177522] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:16:55.672 [2024-07-15 16:02:49.177527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.177531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cefc0) on tqpair=0x68ba60 00:16:55.672 [2024-07-15 16:02:49.177543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.177548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x68ba60) 00:16:55.672 [2024-07-15 16:02:49.177555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.672 [2024-07-15 16:02:49.177576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cefc0, cid 5, qid 0 00:16:55.672 [2024-07-15 16:02:49.177830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.672 [2024-07-15 16:02:49.177851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.672 [2024-07-15 16:02:49.177857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.672 [2024-07-15 16:02:49.177861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cefc0) on tqpair=0x68ba60 00:16:55.672 [2024-07-15 16:02:49.177881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.177887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x68ba60) 00:16:55.673 [2024-07-15 16:02:49.177895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.673 [2024-07-15 16:02:49.177903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.177907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68ba60) 00:16:55.673 [2024-07-15 16:02:49.177914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.673 [2024-07-15 16:02:49.177922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.177926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x68ba60) 00:16:55.673 [2024-07-15 16:02:49.177932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.673 [2024-07-15 16:02:49.177944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.177949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x68ba60) 00:16:55.673 [2024-07-15 16:02:49.177955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.673 [2024-07-15 16:02:49.177990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cefc0, cid 5, qid 0 00:16:55.673 [2024-07-15 16:02:49.177998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cee40, cid 4, qid 0 00:16:55.673 [2024-07-15 16:02:49.178003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cf140, cid 6, qid 0 00:16:55.673 [2024-07-15 
16:02:49.178008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cf2c0, cid 7, qid 0 00:16:55.673 [2024-07-15 16:02:49.178474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.673 [2024-07-15 16:02:49.178488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.673 [2024-07-15 16:02:49.178510] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178514] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68ba60): datao=0, datal=8192, cccid=5 00:16:55.673 [2024-07-15 16:02:49.178519] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cefc0) on tqpair(0x68ba60): expected_datao=0, payload_size=8192 00:16:55.673 [2024-07-15 16:02:49.178524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178541] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178546] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.673 [2024-07-15 16:02:49.178559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.673 [2024-07-15 16:02:49.178562] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178566] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68ba60): datao=0, datal=512, cccid=4 00:16:55.673 [2024-07-15 16:02:49.178571] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cee40) on tqpair(0x68ba60): expected_datao=0, payload_size=512 00:16:55.673 [2024-07-15 16:02:49.178576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178582] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178586] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.673 [2024-07-15 16:02:49.178598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.673 [2024-07-15 16:02:49.178602] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178605] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68ba60): datao=0, datal=512, cccid=6 00:16:55.673 [2024-07-15 16:02:49.178610] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cf140) on tqpair(0x68ba60): expected_datao=0, payload_size=512 00:16:55.673 [2024-07-15 16:02:49.178615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178633] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178637] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:55.673 [2024-07-15 16:02:49.178648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:55.673 [2024-07-15 16:02:49.178652] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178656] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68ba60): datao=0, datal=4096, cccid=7 00:16:55.673 [2024-07-15 16:02:49.178660] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cf2c0) on tqpair(0x68ba60): expected_datao=0, payload_size=4096 00:16:55.673 [2024-07-15 16:02:49.178665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178672] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178676] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.673 [2024-07-15 16:02:49.178690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.673 [2024-07-15 16:02:49.178693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cefc0) on tqpair=0x68ba60 00:16:55.673 [2024-07-15 16:02:49.178717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.673 [2024-07-15 16:02:49.178724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.673 [2024-07-15 16:02:49.178728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cee40) on tqpair=0x68ba60 00:16:55.673 [2024-07-15 16:02:49.178745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.673 [2024-07-15 16:02:49.178752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.673 [2024-07-15 16:02:49.178756] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178760] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cf140) on tqpair=0x68ba60 00:16:55.673 [2024-07-15 16:02:49.178768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.673 [2024-07-15 16:02:49.178774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.673 [2024-07-15 16:02:49.178778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.673 [2024-07-15 16:02:49.178782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cf2c0) on tqpair=0x68ba60 00:16:55.673 ===================================================== 00:16:55.673 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:55.673 ===================================================== 00:16:55.673 Controller Capabilities/Features 00:16:55.673 ================================ 00:16:55.673 Vendor ID: 8086 00:16:55.673 Subsystem Vendor ID: 8086 00:16:55.673 Serial Number: SPDK00000000000001 00:16:55.673 Model Number: SPDK bdev Controller 00:16:55.673 Firmware Version: 24.09 00:16:55.673 Recommended Arb Burst: 6 00:16:55.673 IEEE OUI Identifier: e4 d2 5c 00:16:55.673 Multi-path I/O 00:16:55.673 May have multiple subsystem ports: Yes 00:16:55.673 May have multiple controllers: Yes 00:16:55.673 Associated with SR-IOV VF: No 00:16:55.673 Max Data Transfer Size: 131072 00:16:55.673 Max Number of Namespaces: 32 00:16:55.673 Max Number of I/O Queues: 127 00:16:55.673 NVMe Specification Version (VS): 1.3 00:16:55.673 NVMe Specification Version (Identify): 1.3 00:16:55.673 Maximum Queue Entries: 128 00:16:55.673 Contiguous Queues Required: Yes 00:16:55.673 Arbitration Mechanisms Supported 00:16:55.673 Weighted Round Robin: Not Supported 00:16:55.673 Vendor Specific: Not Supported 00:16:55.673 Reset Timeout: 15000 ms 00:16:55.673 
Doorbell Stride: 4 bytes 00:16:55.673 NVM Subsystem Reset: Not Supported 00:16:55.673 Command Sets Supported 00:16:55.673 NVM Command Set: Supported 00:16:55.673 Boot Partition: Not Supported 00:16:55.673 Memory Page Size Minimum: 4096 bytes 00:16:55.673 Memory Page Size Maximum: 4096 bytes 00:16:55.673 Persistent Memory Region: Not Supported 00:16:55.673 Optional Asynchronous Events Supported 00:16:55.673 Namespace Attribute Notices: Supported 00:16:55.673 Firmware Activation Notices: Not Supported 00:16:55.673 ANA Change Notices: Not Supported 00:16:55.673 PLE Aggregate Log Change Notices: Not Supported 00:16:55.673 LBA Status Info Alert Notices: Not Supported 00:16:55.673 EGE Aggregate Log Change Notices: Not Supported 00:16:55.673 Normal NVM Subsystem Shutdown event: Not Supported 00:16:55.673 Zone Descriptor Change Notices: Not Supported 00:16:55.673 Discovery Log Change Notices: Not Supported 00:16:55.673 Controller Attributes 00:16:55.673 128-bit Host Identifier: Supported 00:16:55.673 Non-Operational Permissive Mode: Not Supported 00:16:55.673 NVM Sets: Not Supported 00:16:55.673 Read Recovery Levels: Not Supported 00:16:55.673 Endurance Groups: Not Supported 00:16:55.673 Predictable Latency Mode: Not Supported 00:16:55.673 Traffic Based Keep ALive: Not Supported 00:16:55.673 Namespace Granularity: Not Supported 00:16:55.673 SQ Associations: Not Supported 00:16:55.673 UUID List: Not Supported 00:16:55.673 Multi-Domain Subsystem: Not Supported 00:16:55.673 Fixed Capacity Management: Not Supported 00:16:55.673 Variable Capacity Management: Not Supported 00:16:55.673 Delete Endurance Group: Not Supported 00:16:55.673 Delete NVM Set: Not Supported 00:16:55.673 Extended LBA Formats Supported: Not Supported 00:16:55.673 Flexible Data Placement Supported: Not Supported 00:16:55.673 00:16:55.673 Controller Memory Buffer Support 00:16:55.673 ================================ 00:16:55.673 Supported: No 00:16:55.673 00:16:55.673 Persistent Memory Region Support 00:16:55.673 ================================ 00:16:55.673 Supported: No 00:16:55.673 00:16:55.673 Admin Command Set Attributes 00:16:55.673 ============================ 00:16:55.673 Security Send/Receive: Not Supported 00:16:55.673 Format NVM: Not Supported 00:16:55.673 Firmware Activate/Download: Not Supported 00:16:55.673 Namespace Management: Not Supported 00:16:55.673 Device Self-Test: Not Supported 00:16:55.673 Directives: Not Supported 00:16:55.673 NVMe-MI: Not Supported 00:16:55.673 Virtualization Management: Not Supported 00:16:55.673 Doorbell Buffer Config: Not Supported 00:16:55.673 Get LBA Status Capability: Not Supported 00:16:55.673 Command & Feature Lockdown Capability: Not Supported 00:16:55.673 Abort Command Limit: 4 00:16:55.674 Async Event Request Limit: 4 00:16:55.674 Number of Firmware Slots: N/A 00:16:55.674 Firmware Slot 1 Read-Only: N/A 00:16:55.674 Firmware Activation Without Reset: N/A 00:16:55.674 Multiple Update Detection Support: N/A 00:16:55.674 Firmware Update Granularity: No Information Provided 00:16:55.674 Per-Namespace SMART Log: No 00:16:55.674 Asymmetric Namespace Access Log Page: Not Supported 00:16:55.674 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:55.674 Command Effects Log Page: Supported 00:16:55.674 Get Log Page Extended Data: Supported 00:16:55.674 Telemetry Log Pages: Not Supported 00:16:55.674 Persistent Event Log Pages: Not Supported 00:16:55.674 Supported Log Pages Log Page: May Support 00:16:55.674 Commands Supported & Effects Log Page: Not Supported 00:16:55.674 Feature Identifiers & 
Effects Log Page:May Support 00:16:55.674 NVMe-MI Commands & Effects Log Page: May Support 00:16:55.674 Data Area 4 for Telemetry Log: Not Supported 00:16:55.674 Error Log Page Entries Supported: 128 00:16:55.674 Keep Alive: Supported 00:16:55.674 Keep Alive Granularity: 10000 ms 00:16:55.674 00:16:55.674 NVM Command Set Attributes 00:16:55.674 ========================== 00:16:55.674 Submission Queue Entry Size 00:16:55.674 Max: 64 00:16:55.674 Min: 64 00:16:55.674 Completion Queue Entry Size 00:16:55.674 Max: 16 00:16:55.674 Min: 16 00:16:55.674 Number of Namespaces: 32 00:16:55.674 Compare Command: Supported 00:16:55.674 Write Uncorrectable Command: Not Supported 00:16:55.674 Dataset Management Command: Supported 00:16:55.674 Write Zeroes Command: Supported 00:16:55.674 Set Features Save Field: Not Supported 00:16:55.674 Reservations: Supported 00:16:55.674 Timestamp: Not Supported 00:16:55.674 Copy: Supported 00:16:55.674 Volatile Write Cache: Present 00:16:55.674 Atomic Write Unit (Normal): 1 00:16:55.674 Atomic Write Unit (PFail): 1 00:16:55.674 Atomic Compare & Write Unit: 1 00:16:55.674 Fused Compare & Write: Supported 00:16:55.674 Scatter-Gather List 00:16:55.674 SGL Command Set: Supported 00:16:55.674 SGL Keyed: Supported 00:16:55.674 SGL Bit Bucket Descriptor: Not Supported 00:16:55.674 SGL Metadata Pointer: Not Supported 00:16:55.674 Oversized SGL: Not Supported 00:16:55.674 SGL Metadata Address: Not Supported 00:16:55.674 SGL Offset: Supported 00:16:55.674 Transport SGL Data Block: Not Supported 00:16:55.674 Replay Protected Memory Block: Not Supported 00:16:55.674 00:16:55.674 Firmware Slot Information 00:16:55.674 ========================= 00:16:55.674 Active slot: 1 00:16:55.674 Slot 1 Firmware Revision: 24.09 00:16:55.674 00:16:55.674 00:16:55.674 Commands Supported and Effects 00:16:55.674 ============================== 00:16:55.674 Admin Commands 00:16:55.674 -------------- 00:16:55.674 Get Log Page (02h): Supported 00:16:55.674 Identify (06h): Supported 00:16:55.674 Abort (08h): Supported 00:16:55.674 Set Features (09h): Supported 00:16:55.674 Get Features (0Ah): Supported 00:16:55.674 Asynchronous Event Request (0Ch): Supported 00:16:55.674 Keep Alive (18h): Supported 00:16:55.674 I/O Commands 00:16:55.674 ------------ 00:16:55.674 Flush (00h): Supported LBA-Change 00:16:55.674 Write (01h): Supported LBA-Change 00:16:55.674 Read (02h): Supported 00:16:55.674 Compare (05h): Supported 00:16:55.674 Write Zeroes (08h): Supported LBA-Change 00:16:55.674 Dataset Management (09h): Supported LBA-Change 00:16:55.674 Copy (19h): Supported LBA-Change 00:16:55.674 00:16:55.674 Error Log 00:16:55.674 ========= 00:16:55.674 00:16:55.674 Arbitration 00:16:55.674 =========== 00:16:55.674 Arbitration Burst: 1 00:16:55.674 00:16:55.674 Power Management 00:16:55.674 ================ 00:16:55.674 Number of Power States: 1 00:16:55.674 Current Power State: Power State #0 00:16:55.674 Power State #0: 00:16:55.674 Max Power: 0.00 W 00:16:55.674 Non-Operational State: Operational 00:16:55.674 Entry Latency: Not Reported 00:16:55.674 Exit Latency: Not Reported 00:16:55.674 Relative Read Throughput: 0 00:16:55.674 Relative Read Latency: 0 00:16:55.674 Relative Write Throughput: 0 00:16:55.674 Relative Write Latency: 0 00:16:55.674 Idle Power: Not Reported 00:16:55.674 Active Power: Not Reported 00:16:55.674 Non-Operational Permissive Mode: Not Supported 00:16:55.674 00:16:55.674 Health Information 00:16:55.674 ================== 00:16:55.674 Critical Warnings: 00:16:55.674 Available Spare Space: 
OK 00:16:55.674 Temperature: OK 00:16:55.674 Device Reliability: OK 00:16:55.674 Read Only: No 00:16:55.674 Volatile Memory Backup: OK 00:16:55.674 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:55.674 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:55.674 Available Spare: 0% 00:16:55.674 Available Spare Threshold: 0% 00:16:55.674 Life Percentage Used:[2024-07-15 16:02:49.178904] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.674 [2024-07-15 16:02:49.178911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x68ba60) 00:16:55.674 [2024-07-15 16:02:49.178920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.674 [2024-07-15 16:02:49.178945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cf2c0, cid 7, qid 0 00:16:55.674 [2024-07-15 16:02:49.183001] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.674 [2024-07-15 16:02:49.183022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.674 [2024-07-15 16:02:49.183027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.674 [2024-07-15 16:02:49.183032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cf2c0) on tqpair=0x68ba60 00:16:55.674 [2024-07-15 16:02:49.183075] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:55.674 [2024-07-15 16:02:49.183088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce840) on tqpair=0x68ba60 00:16:55.674 [2024-07-15 16:02:49.183096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.674 [2024-07-15 16:02:49.183102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce9c0) on tqpair=0x68ba60 00:16:55.674 [2024-07-15 16:02:49.183107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.674 [2024-07-15 16:02:49.183113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ceb40) on tqpair=0x68ba60 00:16:55.674 [2024-07-15 16:02:49.183118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.674 [2024-07-15 16:02:49.183123] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.674 [2024-07-15 16:02:49.183128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.674 [2024-07-15 16:02:49.183138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.674 [2024-07-15 16:02:49.183142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.674 [2024-07-15 16:02:49.183150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.674 [2024-07-15 16:02:49.183159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.674 [2024-07-15 16:02:49.183198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.674 [2024-07-15 16:02:49.183581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.674 [2024-07-15 16:02:49.183597] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.674 [2024-07-15 16:02:49.183602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.674 [2024-07-15 16:02:49.183606] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.674 [2024-07-15 16:02:49.183615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.674 [2024-07-15 16:02:49.183619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.674 [2024-07-15 16:02:49.183624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.674 [2024-07-15 16:02:49.183631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.183656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.183977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.183991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.183996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 [2024-07-15 16:02:49.184006] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:55.675 [2024-07-15 16:02:49.184011] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:55.675 [2024-07-15 16:02:49.184022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184027] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.675 [2024-07-15 16:02:49.184039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.184060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.184121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.184128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.184131] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 [2024-07-15 16:02:49.184147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.675 [2024-07-15 16:02:49.184163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.184181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.184466] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.184479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.184484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 [2024-07-15 16:02:49.184500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.675 [2024-07-15 16:02:49.184516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.184535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.184699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.184712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.184716] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184721] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 [2024-07-15 16:02:49.184732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.184740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.675 [2024-07-15 16:02:49.184748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.184765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.185207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.185220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.185225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 [2024-07-15 16:02:49.185240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185246] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.675 [2024-07-15 16:02:49.185257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.185278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.185348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.185355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.185358] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 [2024-07-15 16:02:49.185388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.675 [2024-07-15 16:02:49.185404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.185419] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.185774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.185787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.185792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 [2024-07-15 16:02:49.185807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.675 [2024-07-15 16:02:49.185823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.185850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.185941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.185948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.185952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 [2024-07-15 16:02:49.185967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.185976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.675 [2024-07-15 16:02:49.185993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.186015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.186504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.186517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.186521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.186526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 
[2024-07-15 16:02:49.186537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.186541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.186546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.675 [2024-07-15 16:02:49.186553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.186572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.186646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.186652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.186656] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.186660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 [2024-07-15 16:02:49.186670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.186674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.186678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.675 [2024-07-15 16:02:49.186685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.186718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.191006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.191027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.191032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.191036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 [2024-07-15 16:02:49.191050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.191056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.191060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68ba60) 00:16:55.675 [2024-07-15 16:02:49.191069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.675 [2024-07-15 16:02:49.191095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cecc0, cid 3, qid 0 00:16:55.675 [2024-07-15 16:02:49.191158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:55.675 [2024-07-15 16:02:49.191166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:55.675 [2024-07-15 16:02:49.191169] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:55.675 [2024-07-15 16:02:49.191174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cecc0) on tqpair=0x68ba60 00:16:55.675 [2024-07-15 16:02:49.191182] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:16:55.675 0% 00:16:55.675 Data Units Read: 0 00:16:55.675 Data 
Units Written: 0 00:16:55.675 Host Read Commands: 0 00:16:55.675 Host Write Commands: 0 00:16:55.675 Controller Busy Time: 0 minutes 00:16:55.675 Power Cycles: 0 00:16:55.675 Power On Hours: 0 hours 00:16:55.675 Unsafe Shutdowns: 0 00:16:55.675 Unrecoverable Media Errors: 0 00:16:55.675 Lifetime Error Log Entries: 0 00:16:55.675 Warning Temperature Time: 0 minutes 00:16:55.675 Critical Temperature Time: 0 minutes 00:16:55.675 00:16:55.676 Number of Queues 00:16:55.676 ================ 00:16:55.676 Number of I/O Submission Queues: 127 00:16:55.676 Number of I/O Completion Queues: 127 00:16:55.676 00:16:55.676 Active Namespaces 00:16:55.676 ================= 00:16:55.676 Namespace ID:1 00:16:55.676 Error Recovery Timeout: Unlimited 00:16:55.676 Command Set Identifier: NVM (00h) 00:16:55.676 Deallocate: Supported 00:16:55.676 Deallocated/Unwritten Error: Not Supported 00:16:55.676 Deallocated Read Value: Unknown 00:16:55.676 Deallocate in Write Zeroes: Not Supported 00:16:55.676 Deallocated Guard Field: 0xFFFF 00:16:55.676 Flush: Supported 00:16:55.676 Reservation: Supported 00:16:55.676 Namespace Sharing Capabilities: Multiple Controllers 00:16:55.676 Size (in LBAs): 131072 (0GiB) 00:16:55.676 Capacity (in LBAs): 131072 (0GiB) 00:16:55.676 Utilization (in LBAs): 131072 (0GiB) 00:16:55.676 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:55.676 EUI64: ABCDEF0123456789 00:16:55.676 UUID: 38122000-adc7-461c-bfa0-39a97a67270c 00:16:55.676 Thin Provisioning: Not Supported 00:16:55.676 Per-NS Atomic Units: Yes 00:16:55.676 Atomic Boundary Size (Normal): 0 00:16:55.676 Atomic Boundary Size (PFail): 0 00:16:55.676 Atomic Boundary Offset: 0 00:16:55.676 Maximum Single Source Range Length: 65535 00:16:55.676 Maximum Copy Length: 65535 00:16:55.676 Maximum Source Range Count: 1 00:16:55.676 NGUID/EUI64 Never Reused: No 00:16:55.676 Namespace Write Protected: No 00:16:55.676 Number of LBA Formats: 1 00:16:55.676 Current LBA Format: LBA Format #00 00:16:55.676 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:55.676 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.676 rmmod nvme_tcp 00:16:55.676 rmmod nvme_fabrics 00:16:55.676 rmmod nvme_keyring 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@125 -- # return 0 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 87236 ']' 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 87236 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 87236 ']' 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 87236 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87236 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:55.676 killing process with pid 87236 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87236' 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 87236 00:16:55.676 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 87236 00:16:55.934 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.934 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.934 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.934 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.934 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:55.934 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.934 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.934 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.934 16:02:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:55.934 00:16:55.934 real 0m2.631s 00:16:55.934 user 0m7.352s 00:16:55.934 sys 0m0.670s 00:16:55.934 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.934 16:02:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:55.934 ************************************ 00:16:55.934 END TEST nvmf_identify 00:16:55.934 ************************************ 00:16:56.193 16:02:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:56.193 16:02:49 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:56.193 16:02:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:56.193 16:02:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.193 16:02:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.193 ************************************ 00:16:56.193 START TEST nvmf_perf 00:16:56.193 ************************************ 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:56.193 * Looking for test storage... 
00:16:56.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:56.193 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:56.194 Cannot find device "nvmf_tgt_br" 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.194 Cannot find device "nvmf_tgt_br2" 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:56.194 Cannot find device "nvmf_tgt_br" 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:56.194 Cannot find device "nvmf_tgt_br2" 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:56.194 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.452 
16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:56.452 16:02:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:56.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:56.452 00:16:56.452 --- 10.0.0.2 ping statistics --- 00:16:56.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.452 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:56.452 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.452 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:56.452 00:16:56.452 --- 10.0.0.3 ping statistics --- 00:16:56.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.452 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:56.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:56.452 00:16:56.452 --- 10.0.0.1 ping statistics --- 00:16:56.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.452 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=87460 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 87460 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 87460 ']' 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.452 16:02:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:56.710 [2024-07-15 16:02:50.220112] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:16:56.710 [2024-07-15 16:02:50.220278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.710 [2024-07-15 16:02:50.376273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.967 [2024-07-15 16:02:50.527775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.967 [2024-07-15 16:02:50.527844] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:56.967 [2024-07-15 16:02:50.527859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.967 [2024-07-15 16:02:50.527870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.967 [2024-07-15 16:02:50.527880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.967 [2024-07-15 16:02:50.528253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.967 [2024-07-15 16:02:50.528812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.967 [2024-07-15 16:02:50.529004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.967 [2024-07-15 16:02:50.529399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.532 16:02:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.532 16:02:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:16:57.532 16:02:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.532 16:02:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:57.532 16:02:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:57.789 16:02:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.789 16:02:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:57.789 16:02:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:58.048 16:02:51 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:58.048 16:02:51 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:58.306 16:02:51 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:58.306 16:02:51 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.565 16:02:52 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:58.565 16:02:52 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:58.565 16:02:52 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:58.565 16:02:52 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:58.565 16:02:52 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:58.823 [2024-07-15 16:02:52.429718] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.823 16:02:52 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:59.081 16:02:52 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:59.081 16:02:52 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:59.339 16:02:52 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:59.339 16:02:52 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:59.597 16:02:53 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.854 [2024-07-15 16:02:53.511159] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.854 16:02:53 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:00.112 16:02:53 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:00.112 16:02:53 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:00.112 16:02:53 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:00.112 16:02:53 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:01.506 Initializing NVMe Controllers 00:17:01.506 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:01.506 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:01.506 Initialization complete. Launching workers. 00:17:01.506 ======================================================== 00:17:01.506 Latency(us) 00:17:01.506 Device Information : IOPS MiB/s Average min max 00:17:01.506 PCIE (0000:00:10.0) NSID 1 from core 0: 24640.00 96.25 1298.34 316.65 8095.65 00:17:01.506 ======================================================== 00:17:01.506 Total : 24640.00 96.25 1298.34 316.65 8095.65 00:17:01.506 00:17:01.506 16:02:54 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:02.437 Initializing NVMe Controllers 00:17:02.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:02.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:02.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:02.437 Initialization complete. Launching workers. 00:17:02.437 ======================================================== 00:17:02.437 Latency(us) 00:17:02.437 Device Information : IOPS MiB/s Average min max 00:17:02.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3515.12 13.73 284.15 108.51 4242.63 00:17:02.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.76 0.48 8144.02 6010.78 12068.54 00:17:02.437 ======================================================== 00:17:02.437 Total : 3638.88 14.21 551.46 108.51 12068.54 00:17:02.437 00:17:02.695 16:02:56 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:04.070 Initializing NVMe Controllers 00:17:04.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:04.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:04.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:04.070 Initialization complete. Launching workers. 
00:17:04.070 ======================================================== 00:17:04.070 Latency(us) 00:17:04.070 Device Information : IOPS MiB/s Average min max 00:17:04.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8630.33 33.71 3711.21 789.39 8502.09 00:17:04.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2665.79 10.41 12099.42 7008.62 23355.21 00:17:04.070 ======================================================== 00:17:04.070 Total : 11296.12 44.13 5690.76 789.39 23355.21 00:17:04.070 00:17:04.070 16:02:57 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:04.070 16:02:57 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:06.595 Initializing NVMe Controllers 00:17:06.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:06.595 Controller IO queue size 128, less than required. 00:17:06.595 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:06.595 Controller IO queue size 128, less than required. 00:17:06.595 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:06.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:06.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:06.595 Initialization complete. Launching workers. 00:17:06.595 ======================================================== 00:17:06.595 Latency(us) 00:17:06.595 Device Information : IOPS MiB/s Average min max 00:17:06.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1205.39 301.35 107891.12 66298.47 172419.51 00:17:06.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 570.00 142.50 231133.14 91559.09 369177.73 00:17:06.595 ======================================================== 00:17:06.595 Total : 1775.39 443.85 147458.84 66298.47 369177.73 00:17:06.595 00:17:06.595 16:03:00 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:06.852 Initializing NVMe Controllers 00:17:06.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:06.852 Controller IO queue size 128, less than required. 00:17:06.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:06.852 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:06.852 Controller IO queue size 128, less than required. 00:17:06.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:06.852 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:17:06.852 WARNING: Some requested NVMe devices were skipped 00:17:06.852 No valid NVMe controllers or AIO or URING devices found 00:17:06.852 16:03:00 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:09.379 Initializing NVMe Controllers 00:17:09.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:09.379 Controller IO queue size 128, less than required. 00:17:09.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:09.379 Controller IO queue size 128, less than required. 00:17:09.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:09.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:09.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:09.379 Initialization complete. Launching workers. 00:17:09.379 00:17:09.379 ==================== 00:17:09.379 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:09.379 TCP transport: 00:17:09.379 polls: 10401 00:17:09.379 idle_polls: 6852 00:17:09.379 sock_completions: 3549 00:17:09.379 nvme_completions: 4031 00:17:09.379 submitted_requests: 5888 00:17:09.379 queued_requests: 1 00:17:09.379 00:17:09.379 ==================== 00:17:09.379 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:09.379 TCP transport: 00:17:09.379 polls: 7901 00:17:09.379 idle_polls: 4598 00:17:09.379 sock_completions: 3303 00:17:09.379 nvme_completions: 6615 00:17:09.379 submitted_requests: 9982 00:17:09.379 queued_requests: 1 00:17:09.379 ======================================================== 00:17:09.379 Latency(us) 00:17:09.379 Device Information : IOPS MiB/s Average min max 00:17:09.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1007.05 251.76 131133.01 79155.38 198768.46 00:17:09.380 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1652.76 413.19 78025.22 42113.73 117607.95 00:17:09.380 ======================================================== 00:17:09.380 Total : 2659.81 664.95 98132.73 42113.73 198768.46 00:17:09.380 00:17:09.380 16:03:02 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:09.380 16:03:02 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.637 rmmod nvme_tcp 00:17:09.637 rmmod nvme_fabrics 00:17:09.637 rmmod nvme_keyring 00:17:09.637 16:03:03 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 87460 ']' 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 87460 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 87460 ']' 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 87460 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87460 00:17:09.637 killing process with pid 87460 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87460' 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 87460 00:17:09.637 16:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 87460 00:17:10.572 16:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:10.572 16:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:10.572 16:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:10.572 16:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:10.572 16:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:10.572 16:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.572 16:03:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.572 16:03:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.572 16:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:10.572 ************************************ 00:17:10.572 END TEST nvmf_perf 00:17:10.572 ************************************ 00:17:10.572 00:17:10.572 real 0m14.454s 00:17:10.572 user 0m53.234s 00:17:10.572 sys 0m3.632s 00:17:10.572 16:03:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:10.572 16:03:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:10.572 16:03:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:10.572 16:03:04 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:10.572 16:03:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:10.572 16:03:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.572 16:03:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.572 ************************************ 00:17:10.572 START TEST nvmf_fio_host 00:17:10.572 ************************************ 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:10.572 * Looking for test storage... 
00:17:10.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.572 16:03:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:10.573 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:10.831 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:10.831 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:10.832 Cannot find device "nvmf_tgt_br" 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.832 Cannot find device "nvmf_tgt_br2" 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:10.832 Cannot find device "nvmf_tgt_br" 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:10.832 Cannot find device "nvmf_tgt_br2" 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:10.832 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:11.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:17:11.090 00:17:11.090 --- 10.0.0.2 ping statistics --- 00:17:11.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.090 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:11.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:11.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:17:11.090 00:17:11.090 --- 10.0.0.3 ping statistics --- 00:17:11.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.090 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:11.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:11.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:11.090 00:17:11.090 --- 10.0.0.1 ping statistics --- 00:17:11.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.090 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87935 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87935 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87935 ']' 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.090 16:03:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.091 16:03:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.091 16:03:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.091 16:03:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.091 [2024-07-15 16:03:04.669500] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:17:11.091 [2024-07-15 16:03:04.669593] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.091 [2024-07-15 16:03:04.808821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:11.349 [2024-07-15 16:03:04.928894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
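The nvmf_veth_init sequence traced above builds a three-leg veth topology: the initiator interface (nvmf_init_if, 10.0.0.1) stays in the root namespace, both target interfaces (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into nvmf_tgt_ns_spdk, and the peer ends are joined by the nvmf_br bridge, which the three pings then verify. A condensed, standalone sketch of the same topology (interface names, addresses and iptables rules taken from the trace; teardown and error handling omitted):

# condensed sketch of the topology nvmf_veth_init builds in this trace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target leg
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target leg
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # initiator -> target paths across the bridge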
00:17:11.349 [2024-07-15 16:03:04.929186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.349 [2024-07-15 16:03:04.929440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.349 [2024-07-15 16:03:04.929627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.349 [2024-07-15 16:03:04.929692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.349 [2024-07-15 16:03:04.929945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.349 [2024-07-15 16:03:04.930609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.349 [2024-07-15 16:03:04.931158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.349 [2024-07-15 16:03:04.931162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.282 16:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.282 16:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:17:12.282 16:03:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:12.541 [2024-07-15 16:03:06.052694] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.541 16:03:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:12.541 16:03:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:12.541 16:03:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.541 16:03:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:12.799 Malloc1 00:17:12.799 16:03:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:13.057 16:03:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.315 16:03:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.573 [2024-07-15 16:03:07.197947] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.573 16:03:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:13.831 16:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:14.089 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:14.089 fio-3.35 00:17:14.089 Starting 1 thread 00:17:16.616 00:17:16.616 test: (groupid=0, jobs=1): err= 0: pid=88066: Mon Jul 15 16:03:09 2024 00:17:16.616 read: IOPS=8955, BW=35.0MiB/s (36.7MB/s)(70.2MiB/2006msec) 00:17:16.616 slat (usec): min=2, max=368, avg= 2.70, stdev= 3.80 00:17:16.616 clat (usec): min=3130, max=13913, avg=7445.49, stdev=604.09 00:17:16.616 lat (usec): min=3184, max=13915, avg=7448.18, stdev=603.81 00:17:16.616 clat percentiles (usec): 00:17:16.616 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7046], 00:17:16.616 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7504], 00:17:16.616 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:17:16.616 | 99.00th=[ 8848], 99.50th=[10945], 99.90th=[13042], 99.95th=[13042], 00:17:16.616 | 99.99th=[13829] 00:17:16.616 bw ( KiB/s): min=34616, max=36528, per=99.95%, avg=35804.00, stdev=833.14, samples=4 00:17:16.616 iops : min= 8654, max= 9132, avg=8951.00, stdev=208.29, samples=4 00:17:16.616 write: IOPS=8977, BW=35.1MiB/s (36.8MB/s)(70.3MiB/2006msec); 0 zone resets 00:17:16.616 slat (usec): min=2, max=275, avg= 2.80, stdev= 2.49 00:17:16.616 clat (usec): min=2658, max=12437, avg=6762.27, stdev=528.88 00:17:16.616 lat (usec): 
min=2672, max=12439, avg=6765.06, stdev=528.73 00:17:16.616 clat percentiles (usec): 00:17:16.616 | 1.00th=[ 5669], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6456], 00:17:16.616 | 30.00th=[ 6587], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6849], 00:17:16.616 | 70.00th=[ 6980], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7373], 00:17:16.616 | 99.00th=[ 7832], 99.50th=[ 9241], 99.90th=[11600], 99.95th=[11731], 00:17:16.616 | 99.99th=[12387] 00:17:16.616 bw ( KiB/s): min=35392, max=36520, per=99.94%, avg=35890.00, stdev=541.42, samples=4 00:17:16.616 iops : min= 8848, max= 9130, avg=8972.50, stdev=135.36, samples=4 00:17:16.616 lat (msec) : 4=0.15%, 10=99.35%, 20=0.50% 00:17:16.616 cpu : usr=68.18%, sys=22.84%, ctx=42, majf=0, minf=7 00:17:16.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:16.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:16.616 issued rwts: total=17965,18009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:16.616 00:17:16.616 Run status group 0 (all jobs): 00:17:16.616 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.2MiB (73.6MB), run=2006-2006msec 00:17:16.616 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.3MiB (73.8MB), run=2006-2006msec 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:16.616 16:03:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:16.616 16:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:16.616 16:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:16.616 16:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:16.616 16:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:16.616 16:03:10 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:16.616 16:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:16.616 16:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:16.616 16:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:16.616 16:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:16.616 16:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:16.616 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:16.616 fio-3.35 00:17:16.616 Starting 1 thread 00:17:19.144 00:17:19.144 test: (groupid=0, jobs=1): err= 0: pid=88115: Mon Jul 15 16:03:12 2024 00:17:19.144 read: IOPS=7940, BW=124MiB/s (130MB/s)(249MiB/2005msec) 00:17:19.144 slat (usec): min=3, max=131, avg= 3.98, stdev= 2.10 00:17:19.144 clat (usec): min=2706, max=19188, avg=9548.75, stdev=2333.90 00:17:19.144 lat (usec): min=2710, max=19193, avg=9552.73, stdev=2334.03 00:17:19.144 clat percentiles (usec): 00:17:19.144 | 1.00th=[ 4948], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7308], 00:17:19.144 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10421], 00:17:19.144 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12256], 95.00th=[13042], 00:17:19.144 | 99.00th=[15139], 99.50th=[15926], 99.90th=[16909], 99.95th=[18220], 00:17:19.144 | 99.99th=[19006] 00:17:19.144 bw ( KiB/s): min=57600, max=68992, per=51.03%, avg=64840.00, stdev=5145.96, samples=4 00:17:19.144 iops : min= 3600, max= 4312, avg=4052.50, stdev=321.62, samples=4 00:17:19.144 write: IOPS=4792, BW=74.9MiB/s (78.5MB/s)(133MiB/1777msec); 0 zone resets 00:17:19.144 slat (usec): min=33, max=848, avg=39.87, stdev=11.66 00:17:19.144 clat (usec): min=3795, max=19892, avg=11658.22, stdev=2166.34 00:17:19.144 lat (usec): min=3832, max=19928, avg=11698.10, stdev=2167.61 00:17:19.144 clat percentiles (usec): 00:17:19.144 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9765], 00:17:19.144 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:17:19.144 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14615], 95.00th=[15533], 00:17:19.144 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19530], 99.95th=[19792], 00:17:19.144 | 99.99th=[19792] 00:17:19.144 bw ( KiB/s): min=59552, max=71808, per=88.22%, avg=67648.00, stdev=5683.46, samples=4 00:17:19.144 iops : min= 3722, max= 4488, avg=4228.00, stdev=355.22, samples=4 00:17:19.144 lat (msec) : 4=0.15%, 10=43.64%, 20=56.21% 00:17:19.144 cpu : usr=71.06%, sys=18.71%, ctx=6, majf=0, minf=20 00:17:19.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:19.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.144 issued rwts: total=15921,8516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.144 00:17:19.144 Run status group 0 (all jobs): 00:17:19.144 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2005-2005msec 00:17:19.144 WRITE: bw=74.9MiB/s (78.5MB/s), 74.9MiB/s-74.9MiB/s 
(78.5MB/s-78.5MB/s), io=133MiB (140MB), run=1777-1777msec 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.144 rmmod nvme_tcp 00:17:19.144 rmmod nvme_fabrics 00:17:19.144 rmmod nvme_keyring 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87935 ']' 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87935 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87935 ']' 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87935 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.144 16:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87935 00:17:19.402 killing process with pid 87935 00:17:19.402 16:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:19.402 16:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:19.402 16:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87935' 00:17:19.402 16:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87935 00:17:19.402 16:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87935 00:17:19.660 16:03:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:19.660 16:03:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:19.660 16:03:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:19.660 16:03:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.660 16:03:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.660 16:03:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.660 16:03:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.660 16:03:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.660 16:03:13 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:19.660 ************************************ 00:17:19.660 END TEST nvmf_fio_host 00:17:19.660 ************************************ 00:17:19.660 00:17:19.660 real 0m8.985s 00:17:19.660 user 0m37.038s 00:17:19.660 sys 0m2.331s 00:17:19.660 16:03:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:19.660 16:03:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.660 16:03:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:19.660 16:03:13 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:19.660 16:03:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:19.660 16:03:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.660 16:03:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:19.660 ************************************ 00:17:19.660 START TEST nvmf_failover 00:17:19.660 ************************************ 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:19.660 * Looking for test storage... 00:17:19.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.660 16:03:13 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
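Before the failover trace continues, the fio_nvme invocations in the nvmf_fio_host run above are worth unpacking: the stock fio binary is pointed at SPDK's external NVMe ioengine via LD_PRELOAD, and the NVMe-oF/TCP target is addressed entirely through the --filename string rather than a block device. A minimal illustrative job in that style (the real example_config.fio is not reproduced here; transport parameters and plugin path are taken from the trace, the job-file body and /tmp path are assumptions):

# illustrative fio job driving the SPDK NVMe plugin over NVMe-oF/TCP (not the actual example_config.fio)
cat > /tmp/spdk_nvme_job.fio <<'EOF'
[global]
ioengine=spdk        # provided by the preloaded SPDK fio plugin
thread=1             # SPDK fio plugins are normally run in thread mode
rw=randrw
iodepth=128
[test]
numjobs=1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /tmp/spdk_nvme_job.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
  --bs=4096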
00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:19.661 Cannot find device "nvmf_tgt_br" 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.661 Cannot find device "nvmf_tgt_br2" 00:17:19.661 16:03:13 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:17:19.661 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:19.919 Cannot find device "nvmf_tgt_br" 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:19.919 Cannot find device "nvmf_tgt_br2" 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:19.919 16:03:13 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:19.919 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:20.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:17:20.178 00:17:20.178 --- 10.0.0.2 ping statistics --- 00:17:20.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.178 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:20.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:17:20.178 00:17:20.178 --- 10.0.0.3 ping statistics --- 00:17:20.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.178 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:20.178 00:17:20.178 --- 10.0.0.1 ping statistics --- 00:17:20.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.178 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:20.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=88329 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 88329 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88329 ']' 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.178 16:03:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:20.178 [2024-07-15 16:03:13.772020] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:17:20.178 [2024-07-15 16:03:13.772102] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.436 [2024-07-15 16:03:13.914217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:20.436 [2024-07-15 16:03:14.024654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.436 [2024-07-15 16:03:14.025024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.436 [2024-07-15 16:03:14.025062] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.436 [2024-07-15 16:03:14.025074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.436 [2024-07-15 16:03:14.025085] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
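The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which blocks until the freshly started target answers on its JSON-RPC socket. Conceptually it behaves like the poll loop below; this is an illustrative stand-in under that assumption, not the actual autotest_common.sh implementation:

# illustrative stand-in for the waitforlisten behaviour seen in this trace
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
pid=88329
for _ in $(seq 1 100); do                       # max_retries=100, as in the trace
  kill -0 "$pid" 2>/dev/null || { echo "target $pid exited early"; exit 1; }
  if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
    echo "target $pid is listening on /var/tmp/spdk.sock"
    break
  fi
  sleep 0.1
done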
00:17:20.436 [2024-07-15 16:03:14.025578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.436 [2024-07-15 16:03:14.025760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.436 [2024-07-15 16:03:14.025771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.370 16:03:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.370 16:03:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:21.370 16:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:21.370 16:03:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.370 16:03:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:21.370 16:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.370 16:03:14 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:21.370 [2024-07-15 16:03:15.029774] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.370 16:03:15 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:21.628 Malloc0 00:17:21.628 16:03:15 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.886 16:03:15 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:22.143 16:03:15 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:22.401 [2024-07-15 16:03:16.028064] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.401 16:03:16 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:22.658 [2024-07-15 16:03:16.256234] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:22.658 16:03:16 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:22.914 [2024-07-15 16:03:16.488409] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:22.914 16:03:16 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:22.914 16:03:16 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88442 00:17:22.914 16:03:16 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:22.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
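With the target answering on /var/tmp/spdk.sock and bdevperf brought up on its own RPC socket, the failover scenario around this point is driven entirely through RPC: the subsystem exposes three TCP listeners, bdevperf attaches two controller paths, and the test then removes listeners one at a time while I/O runs. A condensed sketch of that RPC sequence (commands taken from the surrounding trace; sleeps and error handling omitted):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# target side: one malloc-backed subsystem with three TCP listeners
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns "$nqn" Malloc0
for port in 4420 4421 4422; do
  $rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
done

# initiator side (bdevperf was started with: bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f);
# it attaches two paths to the same subsystem under the bdev name NVMe0
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$nqn"

# while perform_tests runs, listeners are dropped and a third path added to force failover
$rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"
$rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421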
00:17:22.914 16:03:16 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88442 /var/tmp/bdevperf.sock 00:17:22.914 16:03:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88442 ']' 00:17:22.914 16:03:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.914 16:03:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.914 16:03:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.914 16:03:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.914 16:03:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:23.865 16:03:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.865 16:03:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:23.865 16:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:24.430 NVMe0n1 00:17:24.430 16:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:24.687 00:17:24.687 16:03:18 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88484 00:17:24.687 16:03:18 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:24.687 16:03:18 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:25.620 16:03:19 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.877 16:03:19 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:29.159 16:03:22 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:29.159 00:17:29.159 16:03:22 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:29.418 [2024-07-15 16:03:23.059832] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.418 [2024-07-15 16:03:23.059893] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.418 [2024-07-15 16:03:23.059905] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.418 [2024-07-15 16:03:23.059914] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.418 [2024-07-15 16:03:23.059922] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.418 [2024-07-15 16:03:23.059930] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.418 [2024-07-15 16:03:23.059930] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 
... (the same tcp.c:1663 nvmf_tcp_qpair_set_recv_state record repeats with successive timestamps up to 16:03:23.060695) ... 
00:17:29.419 [2024-07-15 16:03:23.060695] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 
16:03:23.060702] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060711] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060719] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060727] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060735] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060742] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060750] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060759] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060767] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060775] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060783] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060791] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060799] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060807] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060814] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060822] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.419 [2024-07-15 16:03:23.060830] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060839] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060847] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060855] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060863] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060871] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same 
with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060879] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060887] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060895] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060903] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060911] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060919] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060927] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060934] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060942] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060950] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060958] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 [2024-07-15 16:03:23.060983] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb8e20 is same with the state(5) to be set 00:17:29.420 16:03:23 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:17:32.765 16:03:26 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.765 [2024-07-15 16:03:26.313878] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.765 16:03:26 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:17:33.698 16:03:27 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:33.957 [2024-07-15 16:03:27.568675] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb99a0 is same with the state(5) to be set 00:17:33.957 [2024-07-15 16:03:27.568724] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb99a0 is same with the state(5) to be set 00:17:33.957 [2024-07-15 16:03:27.568736] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb99a0 is same with the state(5) to be set 00:17:33.957 [2024-07-15 16:03:27.568745] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb99a0 is same with the state(5) to be set 00:17:33.957 [2024-07-15 16:03:27.568754] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb99a0 is same with the state(5) to be set 00:17:33.957 [2024-07-15 
00:17:33.957 [... identical tcp.c:1663:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0x1bb99a0 repeated with only the timestamp changing, 2024-07-15 16:03:27.568724 through 16:03:27.569820 ...]
00:17:33.959 16:03:27 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 88484
00:17:40.523 0
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 88442
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88442 ']'
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88442
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88442
00:17:40.523 killing process with pid 88442
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88442'
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88442
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88442
00:17:40.523 16:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:40.523 [2024-07-15 16:03:16.565766] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization...
00:17:40.523 [2024-07-15 16:03:16.565906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88442 ]
00:17:40.523 [2024-07-15 16:03:16.705395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:40.523 [2024-07-15 16:03:16.818115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:40.523 Running I/O for 15 seconds...
00:17:40.523 [2024-07-15 16:03:19.452925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:40.523 [2024-07-15 16:03:19.453020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:40.523 [... matching nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs repeated for the remaining outstanding I/Os (READ lba:81840-82000, WRITE lba:82128-82648, sqid:1, len:8), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 2024-07-15 16:03:19.453050 through 16:03:19.455760 ...]
00:17:40.526 [2024-07-15 16:03:19.455776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82656 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.455789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.455804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.455817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.455833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.455846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.455861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.455875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.455895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.526 [2024-07-15 16:03:19.455908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.455924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.526 [2024-07-15 16:03:19.455937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.455952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.526 [2024-07-15 16:03:19.455978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.455994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.526 [2024-07-15 16:03:19.456007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.526 [2024-07-15 16:03:19.456036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.526 [2024-07-15 16:03:19.456071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.526 [2024-07-15 
16:03:19.456100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.526 [2024-07-15 16:03:19.456584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.526 [2024-07-15 16:03:19.456643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82816 len:8 PRP1 0x0 PRP2 0x0 00:17:40.526 [2024-07-15 16:03:19.456656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.526 [2024-07-15 16:03:19.456684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.526 [2024-07-15 16:03:19.456695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82824 len:8 PRP1 0x0 PRP2 0x0 00:17:40.526 [2024-07-15 16:03:19.456709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.526 [2024-07-15 16:03:19.456733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.526 [2024-07-15 
16:03:19.456743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82832 len:8 PRP1 0x0 PRP2 0x0 00:17:40.526 [2024-07-15 16:03:19.456756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.526 [2024-07-15 16:03:19.456780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.526 [2024-07-15 16:03:19.456790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82840 len:8 PRP1 0x0 PRP2 0x0 00:17:40.526 [2024-07-15 16:03:19.456803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.526 [2024-07-15 16:03:19.456817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.526 [2024-07-15 16:03:19.456827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.526 [2024-07-15 16:03:19.456838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82848 len:8 PRP1 0x0 PRP2 0x0 00:17:40.526 [2024-07-15 16:03:19.456851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:19.456865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.527 [2024-07-15 16:03:19.456875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.527 [2024-07-15 16:03:19.456886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82064 len:8 PRP1 0x0 PRP2 0x0 00:17:40.527 [2024-07-15 16:03:19.456899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:19.456920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.527 [2024-07-15 16:03:19.456931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.527 [2024-07-15 16:03:19.456941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82072 len:8 PRP1 0x0 PRP2 0x0 00:17:40.527 [2024-07-15 16:03:19.456965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:19.456981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.527 [2024-07-15 16:03:19.456996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.527 [2024-07-15 16:03:19.457007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82080 len:8 PRP1 0x0 PRP2 0x0 00:17:40.527 [2024-07-15 16:03:19.457020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:19.457034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.527 [2024-07-15 16:03:19.457044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.527 [2024-07-15 16:03:19.457055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82088 len:8 PRP1 0x0 PRP2 0x0 00:17:40.527 [2024-07-15 16:03:19.457068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:19.457081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.527 [2024-07-15 16:03:19.457091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.527 [2024-07-15 16:03:19.457101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82096 len:8 PRP1 0x0 PRP2 0x0 00:17:40.527 [2024-07-15 16:03:19.457118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:19.457132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.527 [2024-07-15 16:03:19.457141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.527 [2024-07-15 16:03:19.457152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82104 len:8 PRP1 0x0 PRP2 0x0 00:17:40.527 [2024-07-15 16:03:19.457166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:19.457179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.527 [2024-07-15 16:03:19.457189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.527 [2024-07-15 16:03:19.457199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82112 len:8 PRP1 0x0 PRP2 0x0 00:17:40.527 [2024-07-15 16:03:19.457213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:19.457226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.527 [2024-07-15 16:03:19.457236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.527 [2024-07-15 16:03:19.457246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82120 len:8 PRP1 0x0 PRP2 0x0 00:17:40.527 [2024-07-15 16:03:19.457260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:19.457316] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1649c90 was disconnected and freed. reset controller. 
00:17:40.527 [2024-07-15 16:03:19.457335] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:17:40.527 [2024-07-15 16:03:19.457391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:40.527 [2024-07-15 16:03:19.457420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:40.527 [2024-07-15 16:03:19.457436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:40.527 [2024-07-15 16:03:19.457450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:40.527 [2024-07-15 16:03:19.457463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:40.527 [2024-07-15 16:03:19.457477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:40.527 [2024-07-15 16:03:19.457491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:40.527 [2024-07-15 16:03:19.457509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:40.527 [2024-07-15 16:03:19.457523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:40.527 [2024-07-15 16:03:19.457557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cde30 (9): Bad file descriptor
00:17:40.527 [2024-07-15 16:03:19.461353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:40.527 [2024-07-15 16:03:19.499259] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:40.527 [2024-07-15 16:03:23.061309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061659] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.061957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.527 [2024-07-15 16:03:23.061994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.527 [2024-07-15 16:03:23.062010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:115 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80720 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.528 [2024-07-15 16:03:23.062903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:40.528 [2024-07-15 16:03:23.062933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.528 [2024-07-15 16:03:23.062949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.062962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.062978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063541] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.063975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.063998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.064016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.064029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.064044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.064057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.064072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.064085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.064100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.064113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.064128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.064142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 
16:03:23.064157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.064170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.064185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.529 [2024-07-15 16:03:23.064198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.529 [2024-07-15 16:03:23.064213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.530 [2024-07-15 16:03:23.064226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.530 [2024-07-15 16:03:23.064254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.530 [2024-07-15 16:03:23.064282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.530 [2024-07-15 16:03:23.064318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:106 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.064979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.064993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.065008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.065026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.065048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80992 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.065062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.065077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.065091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.065106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.065119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.065133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.065147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.065163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.065176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.065191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.530 [2024-07-15 16:03:23.065205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.065238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.530 [2024-07-15 16:03:23.065253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.530 [2024-07-15 16:03:23.065264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81040 len:8 PRP1 0x0 PRP2 0x0 00:17:40.530 [2024-07-15 16:03:23.065277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.530 [2024-07-15 16:03:23.065334] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x164bd90 was disconnected and freed. reset controller. 
00:17:40.530 [2024-07-15 16:03:23.065352] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:17:40.530 [2024-07-15 16:03:23.065406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:40.530 [2024-07-15 16:03:23.065426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:40.530 [2024-07-15 16:03:23.065441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:40.530 [2024-07-15 16:03:23.065455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:40.531 [2024-07-15 16:03:23.065469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:40.531 [2024-07-15 16:03:23.065482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:40.531 [2024-07-15 16:03:23.065496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:40.531 [2024-07-15 16:03:23.065510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:40.531 [2024-07-15 16:03:23.065532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:40.531 [2024-07-15 16:03:23.065567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cde30 (9): Bad file descriptor 
00:17:40.531 [2024-07-15 16:03:23.069386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:17:40.531 [2024-07-15 16:03:23.109150] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:40.531 [2024-07-15 16:03:27.571114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571467] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:27928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.571969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.571990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:28080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.531 [2024-07-15 16:03:27.572406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.531 [2024-07-15 16:03:27.572426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28136 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:40.532 [2024-07-15 16:03:27.572813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.572967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.572984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.573019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.573055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.573086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 16:03:27.573117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.532 [2024-07-15 
16:03:27.573147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:28304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:28336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:28384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:28400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.532 [2024-07-15 16:03:27.573585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.532 [2024-07-15 16:03:27.573600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.573968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.573987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:28536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:28560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:28576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574440] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:28680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:28696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:57 nsid:1 lba:28704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:28720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:28736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.533 [2024-07-15 16:03:27.574869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.533 [2024-07-15 16:03:27.574917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28744 len:8 PRP1 0x0 PRP2 0x0 00:17:40.533 [2024-07-15 16:03:27.574930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.533 [2024-07-15 16:03:27.574949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.533 [2024-07-15 16:03:27.574971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.534 [2024-07-15 16:03:27.574983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28752 len:8 PRP1 0x0 PRP2 0x0 00:17:40.534 [2024-07-15 16:03:27.574997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.575011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.534 [2024-07-15 16:03:27.575022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.534 [2024-07-15 16:03:27.575036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28760 len:8 PRP1 0x0 PRP2 0x0 00:17:40.534 [2024-07-15 16:03:27.575050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.575063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.534 [2024-07-15 16:03:27.575073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.534 
[2024-07-15 16:03:27.575084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28768 len:8 PRP1 0x0 PRP2 0x0 00:17:40.534 [2024-07-15 16:03:27.575097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.575111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.534 [2024-07-15 16:03:27.575121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.534 [2024-07-15 16:03:27.575131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28776 len:8 PRP1 0x0 PRP2 0x0 00:17:40.534 [2024-07-15 16:03:27.575145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.575158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.534 [2024-07-15 16:03:27.575169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.534 [2024-07-15 16:03:27.575187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28784 len:8 PRP1 0x0 PRP2 0x0 00:17:40.534 [2024-07-15 16:03:27.575201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.575215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.534 [2024-07-15 16:03:27.575225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.534 [2024-07-15 16:03:27.575235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28792 len:8 PRP1 0x0 PRP2 0x0 00:17:40.534 [2024-07-15 16:03:27.575249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.575262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.534 [2024-07-15 16:03:27.575272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.534 [2024-07-15 16:03:27.575282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28800 len:8 PRP1 0x0 PRP2 0x0 00:17:40.534 [2024-07-15 16:03:27.575296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.575309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.534 [2024-07-15 16:03:27.575319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.534 [2024-07-15 16:03:27.575329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28808 len:8 PRP1 0x0 PRP2 0x0 00:17:40.534 [2024-07-15 16:03:27.575342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.575356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.534 [2024-07-15 16:03:27.575366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.534 [2024-07-15 16:03:27.575377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28816 len:8 PRP1 0x0 PRP2 0x0 00:17:40.534 [2024-07-15 16:03:27.575390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.575403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.534 [2024-07-15 16:03:27.575413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.534 [2024-07-15 16:03:27.575426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28824 len:8 PRP1 0x0 PRP2 0x0 00:17:40.534 [2024-07-15 16:03:27.575440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.575454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:40.534 [2024-07-15 16:03:27.575464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:40.534 [2024-07-15 16:03:27.575474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28832 len:8 PRP1 0x0 PRP2 0x0 00:17:40.534 [2024-07-15 16:03:27.575487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.587435] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x164bb80 was disconnected and freed. reset controller. 00:17:40.534 [2024-07-15 16:03:27.587471] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:40.534 [2024-07-15 16:03:27.587538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.534 [2024-07-15 16:03:27.587560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.587591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.534 [2024-07-15 16:03:27.587606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.587620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.534 [2024-07-15 16:03:27.587633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.587647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.534 [2024-07-15 16:03:27.587661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.534 [2024-07-15 16:03:27.587674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
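The failover sequence above (qpair 0x164bb80 disconnected and freed, "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420", queued admin commands aborted) only works because the same controller name was registered with several alternate TCP paths earlier in the run. A minimal sketch of that registration, reusing the NQN, address, ports and bdevperf RPC socket that appear in this log (the loop is just shorthand for the three separate attach calls the script makes):

  RPC_SOCK=/var/tmp/bdevperf.sock
  for port in 4420 4421 4422; do
    # each attach with the same -b/-n but a different -s adds an alternate
    # path for NVMe0; bdev_nvme fails over between them when one path drops
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done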
00:17:40.534 [2024-07-15 16:03:27.587730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cde30 (9): Bad file descriptor 00:17:40.534 [2024-07-15 16:03:27.591566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:40.534 [2024-07-15 16:03:27.630109] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:40.534 00:17:40.534 Latency(us) 00:17:40.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.534 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:40.534 Verification LBA range: start 0x0 length 0x4000 00:17:40.534 NVMe0n1 : 15.01 9010.75 35.20 239.67 0.00 13804.62 588.33 50522.30 00:17:40.534 =================================================================================================================== 00:17:40.534 Total : 9010.75 35.20 239.67 0.00 13804.62 588.33 50522.30 00:17:40.534 Received shutdown signal, test time was about 15.000000 seconds 00:17:40.534 00:17:40.534 Latency(us) 00:17:40.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.534 =================================================================================================================== 00:17:40.534 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:40.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88689 00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88689 /var/tmp/bdevperf.sock 00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88689 ']' 00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.534 16:03:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:40.534 16:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.534 16:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:40.534 16:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:40.792 [2024-07-15 16:03:34.305848] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:40.792 16:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:41.050 [2024-07-15 16:03:34.546072] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:41.050 16:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:41.307 NVMe0n1 00:17:41.307 16:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:41.566 00:17:41.566 16:03:35 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:41.823 00:17:41.823 16:03:35 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:41.823 16:03:35 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:42.081 16:03:35 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:42.339 16:03:36 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:45.669 16:03:39 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:45.669 16:03:39 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:45.669 16:03:39 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88818 00:17:45.669 16:03:39 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:45.669 16:03:39 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88818 00:17:47.038 0 00:17:47.038 16:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:47.038 [2024-07-15 16:03:33.666551] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:17:47.038 [2024-07-15 16:03:33.666678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88689 ] 00:17:47.038 [2024-07-15 16:03:33.802848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.038 [2024-07-15 16:03:33.916871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.038 [2024-07-15 16:03:36.011388] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:47.038 [2024-07-15 16:03:36.011945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.038 [2024-07-15 16:03:36.012078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.038 [2024-07-15 16:03:36.012177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.038 [2024-07-15 16:03:36.012260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.038 [2024-07-15 16:03:36.012331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.038 [2024-07-15 16:03:36.012407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.038 [2024-07-15 16:03:36.012476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.038 [2024-07-15 16:03:36.012557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.038 [2024-07-15 16:03:36.012628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:47.038 [2024-07-15 16:03:36.012769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedbe30 (9): Bad file descriptor 00:17:47.038 [2024-07-15 16:03:36.012887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:47.038 [2024-07-15 16:03:36.023699] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:47.038 Running I/O for 1 seconds... 
00:17:47.038 00:17:47.038 Latency(us) 00:17:47.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.038 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:47.038 Verification LBA range: start 0x0 length 0x4000 00:17:47.038 NVMe0n1 : 1.01 9250.49 36.13 0.00 0.00 13769.80 2159.71 14000.87 00:17:47.039 =================================================================================================================== 00:17:47.039 Total : 9250.49 36.13 0.00 0.00 13769.80 2159.71 14000.87 00:17:47.039 16:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:47.039 16:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:47.039 16:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:47.295 16:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:47.295 16:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:47.552 16:03:41 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:47.809 16:03:41 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88689 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88689 ']' 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88689 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88689 00:17:51.117 killing process with pid 88689 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88689' 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88689 00:17:51.117 16:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88689 00:17:51.373 16:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:51.373 16:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:51.631 16:03:45 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:51.631 rmmod nvme_tcp 00:17:51.631 rmmod nvme_fabrics 00:17:51.631 rmmod nvme_keyring 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 88329 ']' 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 88329 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88329 ']' 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88329 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88329 00:17:51.631 killing process with pid 88329 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88329' 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88329 00:17:51.631 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88329 00:17:51.890 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.890 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.890 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.890 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.890 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.890 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.890 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.890 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.890 16:03:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:51.890 00:17:51.890 real 0m32.362s 00:17:51.890 user 2m5.746s 00:17:51.890 sys 0m4.656s 00:17:51.890 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.890 16:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:51.890 ************************************ 00:17:51.890 END TEST nvmf_failover 00:17:51.890 ************************************ 00:17:52.149 16:03:45 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:17:52.149 16:03:45 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:52.149 16:03:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:52.149 16:03:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:52.149 16:03:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:52.149 ************************************ 00:17:52.149 START TEST nvmf_host_discovery 00:17:52.149 ************************************ 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:52.149 * Looking for test storage... 00:17:52.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:52.149 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:52.150 Cannot find device "nvmf_tgt_br" 00:17:52.150 
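The "Cannot find device" messages are expected: the setup code first tears down any leftover test interfaces and then rebuilds them. The long ip/iptables sequence that follows boils down to the topology sketched here (condensed; the second target interface nvmf_tgt_if2/10.0.0.3 and the link-up steps are handled the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # joins both halves
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT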
16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.150 Cannot find device "nvmf_tgt_br2" 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:52.150 Cannot find device "nvmf_tgt_br" 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:52.150 Cannot find device "nvmf_tgt_br2" 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:52.150 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:52.408 16:03:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:52.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:17:52.408 00:17:52.408 --- 10.0.0.2 ping statistics --- 00:17:52.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.408 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:52.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:52.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:17:52.408 00:17:52.408 --- 10.0.0.3 ping statistics --- 00:17:52.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.408 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:52.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:52.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:52.408 00:17:52.408 --- 10.0.0.1 ping statistics --- 00:17:52.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.408 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=89123 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 89123 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 89123 ']' 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.408 16:03:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:52.667 [2024-07-15 16:03:46.154597] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:17:52.667 [2024-07-15 16:03:46.154741] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.667 [2024-07-15 16:03:46.296144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.935 [2024-07-15 16:03:46.409771] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
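With nvmf_tgt (pid 89123) now running inside the namespace on /var/tmp/spdk.sock, the discovery test creates the TCP transport, exposes the discovery service on port 8009 and prepares two null bdevs. The rpc_cmd calls traced below are equivalent to issuing these RPCs directly against the target's socket (the explicit -s form is shown here only for clarity):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009                  # discovery service on 8009
  $RPC bdev_null_create null0 1000 512          # backing bdevs published later
  $RPC bdev_null_create null1 1000 512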
00:17:52.935 [2024-07-15 16:03:46.409883] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.935 [2024-07-15 16:03:46.409896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.935 [2024-07-15 16:03:46.409906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.935 [2024-07-15 16:03:46.409914] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.935 [2024-07-15 16:03:46.409946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:53.515 [2024-07-15 16:03:47.221659] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:53.515 [2024-07-15 16:03:47.229777] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:53.515 null0 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.515 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:53.773 null1 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=89172 00:17:53.773 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 89172 /tmp/host.sock 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 89172 ']' 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.773 16:03:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:53.773 [2024-07-15 16:03:47.319409] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:17:53.773 [2024-07-15 16:03:47.319517] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89172 ] 00:17:53.773 [2024-07-15 16:03:47.462685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.031 [2024-07-15 16:03:47.576286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.598 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.598 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:54.598 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.598 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:54.598 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.598 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.598 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.598 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:54.598 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.598 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:54.857 16:03:48 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:54.857 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:55.116 [2024-07-15 16:03:48.690189] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:55.116 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:55.374 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.375 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:17:55.375 16:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:55.632 [2024-07-15 16:03:49.334862] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:55.632 [2024-07-15 16:03:49.334908] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:55.632 [2024-07-15 16:03:49.334944] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:55.890 [2024-07-15 16:03:49.421984] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:55.890 [2024-07-15 16:03:49.487124] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:55.890 [2024-07-15 16:03:49.487173] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:56.454 16:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
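The helpers being traced here (host/discovery.sh@55, @59 and @63) all follow the same pattern: query the host application's RPC socket, pull one field out of the JSON with jq, and flatten the result onto a single line so it can be string-compared. Reconstructed from the traced pipelines only, they appear to reduce to the sketch below (the function bodies are an inference from the xtrace output, not the verbatim test script; rpc_cmd is the same harness wrapper used throughout this trace):

    # Sketch of the traced helpers, inferred from the xtrace output above.
    get_subsystem_names() {
        # host/discovery.sh@59: controllers the host bdev_nvme layer currently knows about
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        # host/discovery.sh@55: namespaces exposed through the attached controllers
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {
        # host/discovery.sh@63: listening port (trsvcid) of every path attached to controller $1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The trailing xargs only collapses jq's one-value-per-line output into a single space-separated string, which is what makes comparisons such as [[ "4420 4421" == "$NVMF_PORT $NVMF_SECOND_PORT" ]] later in the trace work.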
00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:56.454 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.711 [2024-07-15 16:03:50.282784] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:56.711 [2024-07-15 16:03:50.283661] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:56.711 [2024-07-15 16:03:50.283716] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:56.711 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.712 [2024-07-15 16:03:50.369730] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:17:56.712 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.712 [2024-07-15 16:03:50.428140] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:56.712 [2024-07-15 16:03:50.428170] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:56.712 [2024-07-15 16:03:50.428177] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:56.969 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:17:56.969 16:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:57.901 [2024-07-15 16:03:51.580066] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:57.901 [2024-07-15 16:03:51.580238] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:57.901 [2024-07-15 16:03:51.585025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:57.901 [2024-07-15 16:03:51.585062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.901 [2024-07-15 16:03:51.585076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.901 [2024-07-15 16:03:51.585087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.901 [2024-07-15 16:03:51.585097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.901 [2024-07-15 16:03:51.585106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.901 [2024-07-15 16:03:51.585116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.901 [2024-07-15 16:03:51.585125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.901 [2024-07-15 16:03:51.585135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cbc70 is same with the state(5) to be set 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:57.901 16:03:51 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:57.901 [2024-07-15 16:03:51.594981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cbc70 (9): Bad file descriptor 00:17:57.901 [2024-07-15 16:03:51.605003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:57.901 [2024-07-15 16:03:51.605126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.901 [2024-07-15 16:03:51.605150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cbc70 with addr=10.0.0.2, port=4420 00:17:57.901 [2024-07-15 16:03:51.605162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cbc70 is same with the state(5) to be set 00:17:57.901 [2024-07-15 16:03:51.605180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cbc70 (9): Bad file descriptor 00:17:57.901 [2024-07-15 16:03:51.605195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:57.901 [2024-07-15 16:03:51.605205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:57.901 [2024-07-15 16:03:51.605217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:57.901 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.901 [2024-07-15 16:03:51.605244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
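The connect() failed, errno = 111 (ECONNREFUSED) bursts that begin here are the expected fallout of the nvmf_subsystem_remove_listener call traced just above at host/discovery.sh@127: the target has stopped listening on 10.0.0.2:4420, so every host-side reconnect attempt to that path is refused until the discovery poller prunes it. A minimal reproduction of this step, assuming the same NQN, address and sockets as in this run, would be:

    # Target side: drop the first listener (as traced at host/discovery.sh@127)
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Host side: poll until only the 4421 path is left on controller nvme0
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
    # expected output once the stale path has been removed: 4421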
00:17:57.901 [2024-07-15 16:03:51.615062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:57.901 [2024-07-15 16:03:51.615157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.901 [2024-07-15 16:03:51.615179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cbc70 with addr=10.0.0.2, port=4420 00:17:57.901 [2024-07-15 16:03:51.615190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cbc70 is same with the state(5) to be set 00:17:57.901 [2024-07-15 16:03:51.615206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cbc70 (9): Bad file descriptor 00:17:57.901 [2024-07-15 16:03:51.615220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:57.901 [2024-07-15 16:03:51.615229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:57.901 [2024-07-15 16:03:51.615238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:57.901 [2024-07-15 16:03:51.615263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:57.901 [2024-07-15 16:03:51.625124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:57.901 [2024-07-15 16:03:51.625220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.901 [2024-07-15 16:03:51.625242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cbc70 with addr=10.0.0.2, port=4420 00:17:57.901 [2024-07-15 16:03:51.625253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cbc70 is same with the state(5) to be set 00:17:57.901 [2024-07-15 16:03:51.625276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cbc70 (9): Bad file descriptor 00:17:57.901 [2024-07-15 16:03:51.625319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:57.901 [2024-07-15 16:03:51.625331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:57.901 [2024-07-15 16:03:51.625341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:57.901 [2024-07-15 16:03:51.625355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
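The common/autotest_common.sh@912-@918 lines that surround every state check are the generic retry loop the test leans on. Reconstructed from the traced line numbers alone (the trace only ever shows the success path, so the failure branch below is an assumption), it behaves roughly like:

    # Approximation of the traced waitforcondition helper; not the verbatim common/autotest_common.sh.
    waitforcondition() {
        local cond=$1        # @912: the condition string, eval'd verbatim each iteration
        local max=10         # @913: retry budget
        while ((max--)); do  # @914
            if eval "$cond"; then   # @915: e.g. '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
                return 0            # @916: condition met
            fi
            sleep 1          # @918: back off before polling again
        done
        return 1             # assumed failure path, not exercised in this run
    }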
00:17:58.159 [2024-07-15 16:03:51.635185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:58.159 [2024-07-15 16:03:51.635270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:58.159 [2024-07-15 16:03:51.635297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cbc70 with addr=10.0.0.2, port=4420 00:17:58.159 [2024-07-15 16:03:51.635308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cbc70 is same with the state(5) to be set 00:17:58.159 [2024-07-15 16:03:51.635324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cbc70 (9): Bad file descriptor 00:17:58.159 [2024-07-15 16:03:51.635348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:58.159 [2024-07-15 16:03:51.635358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:58.159 [2024-07-15 16:03:51.635368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:58.159 [2024-07-15 16:03:51.635382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:58.159 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:58.159 [2024-07-15 16:03:51.645240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:58.159 [2024-07-15 16:03:51.645314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:58.159 [2024-07-15 16:03:51.645334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cbc70 with addr=10.0.0.2, port=4420 00:17:58.159 [2024-07-15 16:03:51.645345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cbc70 is same with the state(5) to be set 00:17:58.159 [2024-07-15 16:03:51.645360] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cbc70 (9): Bad file descriptor 00:17:58.159 [2024-07-15 16:03:51.645384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:58.159 [2024-07-15 16:03:51.645393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:58.159 [2024-07-15 16:03:51.645402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:58.159 [2024-07-15 16:03:51.645416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:58.159 [2024-07-15 16:03:51.655287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:58.159 [2024-07-15 16:03:51.655382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:58.159 [2024-07-15 16:03:51.655404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cbc70 with addr=10.0.0.2, port=4420 00:17:58.159 [2024-07-15 16:03:51.655416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cbc70 is same with the state(5) to be set 00:17:58.159 [2024-07-15 16:03:51.655433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cbc70 (9): Bad file descriptor 00:17:58.159 [2024-07-15 16:03:51.655447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:58.159 [2024-07-15 16:03:51.655456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:58.159 [2024-07-15 16:03:51.655465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:58.159 [2024-07-15 16:03:51.655480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:58.159 [2024-07-15 16:03:51.665343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:58.160 [2024-07-15 16:03:51.665427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:58.160 [2024-07-15 16:03:51.665447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cbc70 with addr=10.0.0.2, port=4420 00:17:58.160 [2024-07-15 16:03:51.665458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cbc70 is same with the state(5) to be set 00:17:58.160 [2024-07-15 16:03:51.665474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cbc70 (9): Bad file descriptor 00:17:58.160 [2024-07-15 16:03:51.665488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:58.160 [2024-07-15 16:03:51.665497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:58.160 [2024-07-15 16:03:51.665506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:58.160 [2024-07-15 16:03:51.665521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
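Each is_notification_count_eq check that brackets these configuration changes counts how many new events the host application has raised since the last check. Based on the host/discovery.sh@74/@75 commands and the notify_id progression visible in the trace, the counter appears to work roughly as follows; treat this as an inferred sketch rather than the script itself:

    # Inferred from the traced host/discovery.sh@74-@75 behaviour; the exact script body is not shown in this log.
    get_notification_count() {
        # count notifications newer than the last consumed id (bdev add/remove events on the host app)
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        # advance the high-water mark so the next check only sees events raised after this point
        notify_id=$((notify_id + notification_count))
    }

So the expected_count=0 check that follows the listener swap simply confirms that no bdev-level events were generated by the path change itself, since both namespaces stayed attached through the surviving 4421 path.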
00:17:58.160 [2024-07-15 16:03:51.666800] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:58.160 [2024-07-15 16:03:51.666833] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:58.160 
16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:58.160 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.418 16:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.351 [2024-07-15 16:03:52.995394] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:59.351 [2024-07-15 16:03:52.995432] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:59.351 [2024-07-15 16:03:52.995467] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:59.609 [2024-07-15 16:03:53.081502] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:59.609 [2024-07-15 16:03:53.142091] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:59.609 [2024-07-15 16:03:53.142139] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.609 2024/07/15 16:03:53 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 
trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:59.609 request: 00:17:59.609 { 00:17:59.609 "method": "bdev_nvme_start_discovery", 00:17:59.609 "params": { 00:17:59.609 "name": "nvme", 00:17:59.609 "trtype": "tcp", 00:17:59.609 "traddr": "10.0.0.2", 00:17:59.609 "adrfam": "ipv4", 00:17:59.609 "trsvcid": "8009", 00:17:59.609 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:59.609 "wait_for_attach": true 00:17:59.609 } 00:17:59.609 } 00:17:59.609 Got JSON-RPC error response 00:17:59.609 GoRPCClient: error on JSON-RPC call 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:59.609 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:59.610 16:03:53 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.610 2024/07/15 16:03:53 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:59.610 request: 00:17:59.610 { 00:17:59.610 "method": "bdev_nvme_start_discovery", 00:17:59.610 "params": { 00:17:59.610 "name": "nvme_second", 00:17:59.610 "trtype": "tcp", 00:17:59.610 "traddr": "10.0.0.2", 00:17:59.610 "adrfam": "ipv4", 00:17:59.610 "trsvcid": "8009", 00:17:59.610 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:59.610 "wait_for_attach": true 00:17:59.610 } 00:17:59.610 } 00:17:59.610 Got JSON-RPC error response 00:17:59.610 GoRPCClient: error on JSON-RPC call 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:59.610 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.867 16:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.799 [2024-07-15 16:03:54.426746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:00.799 [2024-07-15 16:03:54.426812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1905660 with addr=10.0.0.2, port=8010 00:18:00.799 [2024-07-15 16:03:54.426838] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:00.799 [2024-07-15 16:03:54.426848] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:00.799 [2024-07-15 16:03:54.426858] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:01.733 [2024-07-15 16:03:55.426753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:01.733 [2024-07-15 16:03:55.426860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f8500 with addr=10.0.0.2, port=8010 00:18:01.733 [2024-07-15 16:03:55.426886] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:01.733 [2024-07-15 16:03:55.426897] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:01.733 [2024-07-15 16:03:55.426907] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:02.713 [2024-07-15 16:03:56.426606] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:02.713 2024/07/15 16:03:56 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second 
traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:02.713 request: 00:18:02.713 { 00:18:02.713 "method": "bdev_nvme_start_discovery", 00:18:02.713 "params": { 00:18:02.713 "name": "nvme_second", 00:18:02.713 "trtype": "tcp", 00:18:02.713 "traddr": "10.0.0.2", 00:18:02.713 "adrfam": "ipv4", 00:18:02.713 "trsvcid": "8010", 00:18:02.713 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:02.713 "wait_for_attach": false, 00:18:02.713 "attach_timeout_ms": 3000 00:18:02.713 } 00:18:02.713 } 00:18:02.713 Got JSON-RPC error response 00:18:02.713 GoRPCClient: error on JSON-RPC call 00:18:02.713 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:02.713 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:02.713 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:02.713 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:02.713 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:02.713 16:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 89172 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:02.971 rmmod nvme_tcp 00:18:02.971 rmmod nvme_fabrics 00:18:02.971 rmmod nvme_keyring 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 89123 ']' 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 89123 00:18:02.971 16:03:56 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 89123 ']' 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 89123 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89123 00:18:02.971 killing process with pid 89123 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89123' 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 89123 00:18:02.971 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 89123 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:03.229 00:18:03.229 real 0m11.256s 00:18:03.229 user 0m22.234s 00:18:03.229 sys 0m1.663s 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.229 ************************************ 00:18:03.229 END TEST nvmf_host_discovery 00:18:03.229 ************************************ 00:18:03.229 16:03:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:03.229 16:03:56 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:03.229 16:03:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:03.229 16:03:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.229 16:03:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:03.229 ************************************ 00:18:03.229 START TEST nvmf_host_multipath_status 00:18:03.229 ************************************ 00:18:03.229 16:03:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:03.487 * Looking for test storage... 
00:18:03.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.487 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:03.488 Cannot find device "nvmf_tgt_br" 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:18:03.488 Cannot find device "nvmf_tgt_br2" 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:03.488 Cannot find device "nvmf_tgt_br" 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:03.488 Cannot find device "nvmf_tgt_br2" 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:03.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:03.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:03.488 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:03.746 16:03:57 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:03.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:18:03.746 00:18:03.746 --- 10.0.0.2 ping statistics --- 00:18:03.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.746 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:03.746 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:03.746 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:18:03.746 00:18:03.746 --- 10.0.0.3 ping statistics --- 00:18:03.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.746 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:03.746 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:03.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:03.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:18:03.747 00:18:03.747 --- 10.0.0.1 ping statistics --- 00:18:03.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.747 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89655 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89655 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89655 ']' 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.747 16:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:03.747 [2024-07-15 16:03:57.451068] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:18:03.747 [2024-07-15 16:03:57.451180] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.005 [2024-07-15 16:03:57.592582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:04.005 [2024-07-15 16:03:57.693782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.005 [2024-07-15 16:03:57.693851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.005 [2024-07-15 16:03:57.693862] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.005 [2024-07-15 16:03:57.693871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.005 [2024-07-15 16:03:57.693905] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.005 [2024-07-15 16:03:57.694268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.005 [2024-07-15 16:03:57.694361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.938 16:03:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.938 16:03:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:04.938 16:03:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.938 16:03:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.938 16:03:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:04.938 16:03:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.938 16:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89655 00:18:04.938 16:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:04.938 [2024-07-15 16:03:58.665491] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.195 16:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:05.452 Malloc0 00:18:05.452 16:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:05.710 16:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:05.967 16:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.231 [2024-07-15 16:03:59.787493] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.231 16:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:18:06.496 [2024-07-15 16:04:00.023647] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:06.496 16:04:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89753 00:18:06.496 16:04:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:06.496 16:04:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.496 16:04:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89753 /var/tmp/bdevperf.sock 00:18:06.496 16:04:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89753 ']' 00:18:06.496 16:04:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.496 16:04:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.496 16:04:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.496 16:04:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.496 16:04:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:07.426 16:04:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.426 16:04:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:07.426 16:04:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:07.683 16:04:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:07.940 Nvme0n1 00:18:08.198 16:04:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:08.456 Nvme0n1 00:18:08.456 16:04:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:08.456 16:04:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:11.004 16:04:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:11.004 16:04:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:11.004 16:04:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:18:11.004 16:04:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:12.376 16:04:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:12.376 16:04:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:12.376 16:04:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.376 16:04:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:12.376 16:04:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.376 16:04:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:12.376 16:04:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.376 16:04:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:12.634 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:12.634 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:12.634 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.634 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:12.891 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.891 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:12.891 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.891 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:13.149 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:13.149 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:13.149 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.149 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:13.406 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:13.407 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:13.407 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.407 16:04:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:13.664 16:04:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:13.664 16:04:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:13.664 16:04:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:13.922 16:04:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:14.179 16:04:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:15.112 16:04:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:15.112 16:04:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:15.112 16:04:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.112 16:04:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:15.370 16:04:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:15.370 16:04:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:15.370 16:04:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.370 16:04:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:15.627 16:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:15.627 16:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:15.627 16:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.627 16:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:15.885 16:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:15.885 16:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:15.885 16:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.885 16:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:16.143 16:04:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.143 16:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:16.143 16:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.143 16:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:16.401 16:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.401 16:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:16.401 16:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.401 16:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:16.967 16:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.967 16:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:16.967 16:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:16.967 16:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:17.225 16:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:18.259 16:04:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:18.259 16:04:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:18.259 16:04:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.259 16:04:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:18.517 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.517 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:18.517 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.517 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:18.774 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:18.774 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:18.774 16:04:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.774 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:19.032 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.032 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:19.032 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.032 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:19.290 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.290 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:19.290 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.290 16:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:19.548 16:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.548 16:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:19.548 16:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.548 16:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:20.113 16:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:20.113 16:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:20.113 16:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:20.370 16:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:20.628 16:04:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:21.561 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:21.561 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:21.561 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.561 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:21.818 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.818 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:21.818 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.818 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:22.076 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:22.076 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:22.076 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.076 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:22.334 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.334 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:22.334 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.334 16:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:22.591 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.591 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:22.591 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.591 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:22.848 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.848 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:22.848 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.848 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:23.106 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:23.106 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:23.106 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:23.369 16:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:23.645 16:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:24.578 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:24.578 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:24.578 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:24.578 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:24.835 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:24.835 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:24.835 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:24.835 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:25.092 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:25.092 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:25.092 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.092 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:25.350 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:25.350 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:25.350 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.350 16:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:25.608 16:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:25.608 16:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:25.608 16:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.608 16:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:25.866 16:04:19 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:25.866 16:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:25.866 16:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:25.866 16:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.123 16:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:26.123 16:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:26.123 16:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:26.379 16:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:26.636 16:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:27.593 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:27.593 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:27.593 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:27.593 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:27.851 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:27.851 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:27.851 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:27.851 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.108 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.108 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:28.108 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.108 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:28.366 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.366 16:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:28.366 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.366 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:28.624 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.624 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:28.624 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.624 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:28.883 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:28.883 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:28.883 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.883 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:29.141 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.141 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:29.399 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:29.399 16:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:29.657 16:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:29.916 16:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:30.848 16:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:30.848 16:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:30.848 16:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:30.848 16:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:31.105 16:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:31.105 16:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:31.105 16:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:31.105 16:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:31.362 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:31.362 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:31.362 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:31.362 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:31.620 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:31.620 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:31.620 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:31.620 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:31.878 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:31.878 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:31.878 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:31.878 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:32.136 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:32.136 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:32.136 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:32.136 16:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:32.415 16:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:32.415 16:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:32.415 16:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:32.722 16:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:32.980 16:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:33.913 
16:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:33.913 16:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:33.913 16:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.913 16:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:34.171 16:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:34.171 16:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:34.171 16:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.171 16:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:34.429 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:34.429 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:34.429 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.429 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:34.686 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:34.686 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:34.686 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.686 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:34.944 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:34.944 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:34.944 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.944 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:35.203 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:35.203 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:35.203 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:35.203 16:04:28 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:35.461 16:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:35.461 16:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:35.461 16:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:35.720 16:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:35.977 16:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:37.351 16:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:37.351 16:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:37.351 16:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.351 16:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:37.351 16:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.351 16:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:37.351 16:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.351 16:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:37.609 16:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.609 16:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:37.609 16:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.609 16:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:37.867 16:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.867 16:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:37.867 16:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.867 16:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:38.433 16:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:38.433 16:04:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:38.433 16:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.433 16:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:38.433 16:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:38.433 16:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:38.433 16:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.433 16:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:39.000 16:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:39.000 16:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:39.000 16:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:39.000 16:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:39.258 16:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:40.631 16:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:40.631 16:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:40.631 16:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.631 16:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:40.631 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.631 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:40.631 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.631 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:40.889 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:40.889 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:40.889 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.889 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:41.146 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.146 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:41.146 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.147 16:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:41.404 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.404 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:41.404 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.404 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:41.662 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.662 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:41.662 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.662 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:41.920 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:41.920 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89753 00:18:41.920 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89753 ']' 00:18:41.921 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89753 00:18:41.921 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:41.921 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:41.921 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89753 00:18:41.921 killing process with pid 89753 00:18:41.921 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:41.921 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:41.921 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89753' 00:18:41.921 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89753 00:18:41.921 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89753 00:18:42.205 Connection closed with partial response: 00:18:42.205 
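Every port_status check in the stretch above follows the same pattern from host/multipath_status.sh: ask the bdevperf app for its io paths over its RPC socket, filter the JSON by listener port with jq, and compare one flag (current / connected / accessible) against the expected value, while set_ANA_state flips the listener state on the target with nvmf_subsystem_listener_set_ana_state. Below is a minimal shell sketch of that pattern, reusing only commands visible in the trace; the helper names port_flag and set_ana are illustrative, not names from the test script.

# Read one flag ("current", "connected" or "accessible") of the io path behind a listener port.
# Sketch only: assumes the same bdevperf RPC socket as the run above (/var/tmp/bdevperf.sock).
port_flag() {
    local port=$1 flag=$2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$flag"
}

# Flip the ANA state of one listener on the target side
# (states exercised above: optimized, non_optimized, inaccessible).
set_ana() {
    local port=$1 state=$2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port" -n "$state"
}

# Example mirroring one cycle of the run: make 4421 inaccessible, give the host a second
# to notice, then expect the 4421 path to report accessible == false.
set_ana 4421 inaccessible
sleep 1
[[ "$(port_flag 4421 accessible)" == false ]]

After the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call at @116, both paths report current == true in the later checks, whereas before it only one path at a time is current.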
00:18:42.205 00:18:42.206 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89753 00:18:42.206 16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:42.206 [2024-07-15 16:04:00.088015] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:18:42.206 [2024-07-15 16:04:00.088120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89753 ] 00:18:42.206 [2024-07-15 16:04:00.220327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.206 [2024-07-15 16:04:00.319865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.206 Running I/O for 90 seconds... 00:18:42.206 [2024-07-15 16:04:16.866433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.866511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.866589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.866611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.866634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.866649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.866671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.866686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.866707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.866722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.866743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.866757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.866779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.866793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.866813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.866828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.866850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.866865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.866886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.866901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.866921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.866975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
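The try.txt dump being printed here is bdevperf's NVMe qpair trace: each nvme_io_qpair_print_command NOTICE records a submitted READ or WRITE (submission queue id, command id, namespace, LBA, length), and the paired spdk_nvme_print_completion NOTICE records its status, with ASYMMETRIC ACCESS INACCESSIBLE (03/02) being the path-related status (status code type 03h, status code 02h) returned while a listener sits in the inaccessible ANA state. A hypothetical way to summarize such a dump after the fact; these grep commands are an illustration, not part of the test run:

# Count completions carrying the ANA-inaccessible path status (illustrative only).
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
# Total completions printed, for comparison.
grep -c 'spdk_nvme_print_completion' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt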
00:18:42.206 [2024-07-15 16:04:16.867244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.206 [2024-07-15 16:04:16.867746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.206 [2024-07-15 16:04:16.867782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.867805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.206 [2024-07-15 16:04:16.867820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.870279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.206 [2024-07-15 16:04:16.870307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.870341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.206 [2024-07-15 16:04:16.870358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.870400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.206 [2024-07-15 16:04:16.870418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.870448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.206 [2024-07-15 16:04:16.870463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.870492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.206 [2024-07-15 16:04:16.870508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:42.206 [2024-07-15 16:04:16.870537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.870552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.870582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.870598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.870627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.870643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.870672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.870688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.870716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.870732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.870761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.870776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.870806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.870821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.870850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.870865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.870904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:42.207 [2024-07-15 16:04:16.870921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.870950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.870991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:34664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:16.871739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:16.871760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.911613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.207 [2024-07-15 16:04:32.911679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.911731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.207 [2024-07-15 16:04:32.911749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.911771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.207 [2024-07-15 16:04:32.911785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.911806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.207 [2024-07-15 16:04:32.911820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.911840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.207 [2024-07-15 16:04:32.911855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.911875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:32.911889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.911910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:32.911924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.911945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:32.911959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.911991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:32.912008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.912057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.207 [2024-07-15 16:04:32.912073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.912093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.207 [2024-07-15 16:04:32.912107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.912127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.207 [2024-07-15 16:04:32.912140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:42.207 [2024-07-15 16:04:32.912160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.207 [2024-07-15 16:04:32.912174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 
00:18:42.207 [repeated nvme_qpair.c *NOTICE* output, 2024-07-15 16:04:32.912 through 16:04:32.929 (build time 00:18:42.207-42.213): nvme_io_qpair_print_command dumps the outstanding READ/WRITE commands on sqid:1, nsid:1, lba range 112208-113504, len:8 (SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK TRANSPORT), and spdk_nvme_print_completion reports each one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.929660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.929674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.929693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.929707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.929727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.929741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.929761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.929775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.929796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.929810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.929830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.929844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.929864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.929878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.931790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.931818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.931844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.931859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.931879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.931919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.931942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.931957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.931994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:42.213 [2024-07-15 16:04:32.932312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.932588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.932622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.932655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 
nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.932688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.932722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.932755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.932788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.932821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.932863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.932897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.932938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.932974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.933005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.933038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.933054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.933076] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.933090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.933112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.933127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.934092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.934119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.934146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.213 [2024-07-15 16:04:32.934163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.934185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.934200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.934222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.934236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.934258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.934272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.934321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.934337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.934382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.934398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:42.213 [2024-07-15 16:04:32.934420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.213 [2024-07-15 16:04:32.934435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.934456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.934471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.934492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.934507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.934528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.934543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.934566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.934581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.937073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.937118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.937154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.937190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.937226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.937277] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.937313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.937364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.937397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 
[2024-07-15 16:04:32.937635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.937965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.937993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.938016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.938032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.938053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.938068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.938089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.938104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.938124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.938139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.938161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.938185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.938207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.938222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.938244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.938259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.938280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.938310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.938330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.938344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.214 [2024-07-15 16:04:32.940387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:18:42.214 [2024-07-15 16:04:32.940769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:42.214 [2024-07-15 16:04:32.940802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.214 [2024-07-15 16:04:32.940816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.940836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.940850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.940870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.940884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.940904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.940917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.941445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.941485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.941520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.941555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.941602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.941636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.941670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.941703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.941737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.941771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.941804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.941839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.941873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.941937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.941965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.941994] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.942033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.942079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.942116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.942152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.942187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.942223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.942259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.942295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.942344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113792 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.942378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.942412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.942446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.942480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.942521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.942556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.942605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.942640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.942675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.942710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942730] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.942745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.942766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.942780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.944065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.944112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.944159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.944195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.944243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.944281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.944317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.944368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 
16:04:32.944388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.944402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.944436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.944470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.944504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.944538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.944571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.944605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.944638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.944658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.215 [2024-07-15 16:04:32.944672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.945214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.945251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.945277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.945294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.945316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.945332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.945367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.945381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:42.215 [2024-07-15 16:04:32.945401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.215 [2024-07-15 16:04:32.945415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.945448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.945482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.945516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.945550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.945584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.945617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.945651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.945698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.945732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.945765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.945799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.945833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.945866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.945927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.945955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.945970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.946006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:42.216 [2024-07-15 16:04:32.946022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.946043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.946058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.946631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.946673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.946699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.946715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.946747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.946763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.946785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.946799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.946820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.946834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.946855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.946869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.946890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.946904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.946924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.946939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.946959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.946990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.947011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.947052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.947075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.947091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.947112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.947127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.947148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.947163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.947184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.947198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.947220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.947244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.947782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.947807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.947832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.947848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.947869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.947883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 
16:04:32.947903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.947917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.947937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.947951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.947987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.948034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.948075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.948111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.948147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.948183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.948219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.948266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.948304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.948356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.948767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.948807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.948842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.948876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.948910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.948944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.948996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.949011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.949050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.949066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.949088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.949102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.949124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.949149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.949173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.216 [2024-07-15 16:04:32.949188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.949781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.216 [2024-07-15 16:04:32.949807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:42.216 [2024-07-15 16:04:32.949832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.949847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.949867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.949881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.949928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.949955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.949988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.950006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.950043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.950080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.950115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.950151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.950187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.950222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.950286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.950335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.950368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.950402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.950436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.950469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.950490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:53 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.950504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.953723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.953751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.953776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.953792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.953814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.953828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.953848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.953862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.953882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.953906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.953969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 
16:04:32.954131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.954350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.954419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.954461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.217 [2024-07-15 16:04:32.954532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.954566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.954599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.954633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.954682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.954717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.954751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.954772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.954787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.955281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.955308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.955349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.955380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.955400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.955425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.955448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.955462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.955482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.955496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.955515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.955530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:42.217 [2024-07-15 16:04:32.955550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-07-15 16:04:32.955564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.955583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.955597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.955617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.955631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.955651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.955665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.955685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.955698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.955718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:42.218 [2024-07-15 16:04:32.955732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.955752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.955766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.955785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.955799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.955819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.955840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.955861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.955876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.956573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-07-15 16:04:32.956599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.956624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-07-15 16:04:32.956639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.956660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-07-15 16:04:32.956674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.956694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.956708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.956729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.956743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.956763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.956777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.956797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.956811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.956831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-07-15 16:04:32.956844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.956864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-07-15 16:04:32.956878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.956898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-07-15 16:04:32.956912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.956931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-07-15 16:04:32.956945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.957024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-07-15 16:04:32.957043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.957065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-07-15 16:04:32.957079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.957101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-07-15 16:04:32.957115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.957136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-07-15 16:04:32.957151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:42.218 [2024-07-15 16:04:32.957172] 
nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: [2024-07-15 16:04:32.957186 through 16:04:32.968662] condensed: several hundred near-identical command/completion pairs were logged here. Each pair reports a READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) or WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) on qid:1, nsid:1, len:8, with LBAs in the range 113560-115408, completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, i.e. the ANA-inaccessible status returned while the nvmf_host_multipath_status test had the path's ANA state set to inaccessible.
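The condensed block above can be tallied from a saved copy of the full console output. The bash sketch below is a minimal helper for that, assuming the capture has been written to ./console.log (an assumed path, not something the test produces) and using only standard grep and awk.

  #!/usr/bin/env bash
  # Count how many of the condensed I/O prints were READs vs WRITEs.
  # ./console.log is an assumed path to a saved copy of this console output.
  log=${1:-./console.log}
  # Each print_command NOTICE in this capture is paired with an
  # ASYMMETRIC ACCESS INACCESSIBLE completion, so counting the command
  # lines per opcode gives the failed-I/O breakdown.
  grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' "$log" |
    awk '{n[$NF]++} END {for (op in n) print op, n[op]}'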
00:18:42.220 Received shutdown signal, test time was about 33.379154 seconds
00:18:42.220
00:18:42.220 Latency(us)
00:18:42.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:42.220 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:42.220 Verification LBA range: start 0x0 length 0x4000
00:18:42.220 Nvme0n1 : 33.38 8519.78 33.28 0.00 0.00 14994.39 441.25 4026531.84
===================================================================================================================
Total : 8519.78 33.28 0.00 0.00 14994.39 441.25 4026531.84
16:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
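The three traced commands above are the whole local teardown of the multipath status test. A minimal sketch of the same sequence, with the paths and NQN copied from the trace (the wrapper function name is an assumption, not an SPDK helper), would be:

  # Sketch of the teardown traced above; cleanup_multipath_status is an
  # assumed name, the commands themselves are taken verbatim from the log.
  cleanup_multipath_status() {
      local rootdir=/home/vagrant/spdk_repo/spdk
      # Remove the NVMe/TCP subsystem the test exported to the host.
      "$rootdir/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
      # Drop the error trap installed at test start and delete the scratch file.
      trap - SIGINT SIGTERM EXIT
      rm -f "$rootdir/test/nvmf/host/try.txt"
  }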
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.478 rmmod nvme_tcp 00:18:42.478 rmmod nvme_fabrics 00:18:42.478 rmmod nvme_keyring 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89655 ']' 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89655 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89655 ']' 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89655 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89655 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89655' 00:18:42.478 killing process with pid 89655 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89655 00:18:42.478 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89655 00:18:42.735 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.735 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.735 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.735 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.735 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.735 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.735 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.735 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.735 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:42.992 ************************************ 
00:18:42.992 END TEST nvmf_host_multipath_status 00:18:42.992 ************************************ 00:18:42.992 00:18:42.992 real 0m39.522s 00:18:42.992 user 2m8.983s 00:18:42.992 sys 0m9.877s 00:18:42.992 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:42.992 16:04:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:42.992 16:04:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:42.992 16:04:36 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:42.992 16:04:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:42.992 16:04:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.992 16:04:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:42.992 ************************************ 00:18:42.993 START TEST nvmf_discovery_remove_ifc 00:18:42.993 ************************************ 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:42.993 * Looking for test storage... 00:18:42.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:42.993 16:04:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:42.993 Cannot find device "nvmf_tgt_br" 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.993 Cannot find device "nvmf_tgt_br2" 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:42.993 Cannot find device "nvmf_tgt_br" 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:42.993 Cannot find device "nvmf_tgt_br2" 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:18:42.993 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:43.251 16:04:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:43.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:18:43.251 00:18:43.251 --- 10.0.0.2 ping statistics --- 00:18:43.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.251 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:43.251 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:43.251 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:18:43.251 00:18:43.251 --- 10.0.0.3 ping statistics --- 00:18:43.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.251 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:43.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:43.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:43.251 00:18:43.251 --- 10.0.0.1 ping statistics --- 00:18:43.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.251 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=91060 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 91060 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 91060 ']' 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:43.251 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.252 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.252 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.252 16:04:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:43.509 [2024-07-15 16:04:37.039836] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:18:43.509 [2024-07-15 16:04:37.040019] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.509 [2024-07-15 16:04:37.180016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.766 [2024-07-15 16:04:37.272672] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.767 [2024-07-15 16:04:37.272744] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.767 [2024-07-15 16:04:37.272770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.767 [2024-07-15 16:04:37.272778] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.767 [2024-07-15 16:04:37.272785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.767 [2024-07-15 16:04:37.272810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.341 16:04:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.341 16:04:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:44.341 16:04:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.341 16:04:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:44.341 16:04:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:44.341 16:04:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.341 16:04:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:44.341 16:04:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.341 16:04:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:44.341 [2024-07-15 16:04:37.989260] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.341 [2024-07-15 16:04:37.997399] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:44.341 null0 00:18:44.341 [2024-07-15 16:04:38.029319] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.341 16:04:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.341 16:04:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91109 00:18:44.341 16:04:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:44.341 16:04:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91109 /tmp/host.sock 00:18:44.341 16:04:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 91109 ']' 00:18:44.341 16:04:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:44.341 16:04:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.341 Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock... 00:18:44.341 16:04:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:44.341 16:04:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.341 16:04:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:44.599 [2024-07-15 16:04:38.113758] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:18:44.599 [2024-07-15 16:04:38.113865] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91109 ] 00:18:44.599 [2024-07-15 16:04:38.255157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.857 [2024-07-15 16:04:38.358319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.423 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.423 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:45.423 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:45.423 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:45.423 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.423 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:45.423 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.423 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:45.423 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.423 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:45.681 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.681 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:45.681 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.681 16:04:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:46.638 [2024-07-15 16:04:40.252348] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:46.638 [2024-07-15 16:04:40.252382] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:46.638 [2024-07-15 16:04:40.252418] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:46.638 [2024-07-15 16:04:40.338524] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:46.896 
[2024-07-15 16:04:40.395531] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:46.896 [2024-07-15 16:04:40.395612] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:46.896 [2024-07-15 16:04:40.395640] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:46.896 [2024-07-15 16:04:40.395657] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:46.896 [2024-07-15 16:04:40.395683] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:46.896 [2024-07-15 16:04:40.400862] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd62660 was disconnected and freed. delete nvme_qpair. 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.896 16:04:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:46.896 16:04:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:47.831 16:04:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:47.831 16:04:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.831 16:04:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:47.831 16:04:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.831 16:04:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:47.831 16:04:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:47.831 16:04:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:48.090 16:04:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.090 16:04:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:48.090 16:04:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:49.023 16:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:49.023 16:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.023 16:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.023 16:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:49.023 16:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:49.023 16:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:49.023 16:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:49.023 16:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.023 16:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:49.023 16:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:49.958 16:04:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:49.958 16:04:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:49.958 16:04:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.958 16:04:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.958 16:04:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:49.958 16:04:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:49.958 16:04:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:50.216 16:04:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.216 16:04:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:50.216 16:04:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:51.149 16:04:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:51.149 16:04:44 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:51.149 16:04:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:51.149 16:04:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.149 16:04:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:51.149 16:04:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:51.149 16:04:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:51.149 16:04:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.149 16:04:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:51.149 16:04:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:52.085 16:04:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:52.085 16:04:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:52.085 16:04:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:52.085 16:04:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.085 16:04:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:52.085 16:04:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:52.085 16:04:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:52.085 16:04:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.343 [2024-07-15 16:04:45.824387] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:52.343 [2024-07-15 16:04:45.824465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.343 [2024-07-15 16:04:45.824480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.343 [2024-07-15 16:04:45.824493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.343 [2024-07-15 16:04:45.824502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.343 [2024-07-15 16:04:45.824512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.343 [2024-07-15 16:04:45.824520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.343 [2024-07-15 16:04:45.824529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.343 [2024-07-15 16:04:45.824538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.343 [2024-07-15 16:04:45.824547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.343 [2024-07-15 16:04:45.824556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.343 [2024-07-15 16:04:45.824564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b920 is same with the state(5) to be set 00:18:52.343 [2024-07-15 16:04:45.834381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b920 (9): Bad file descriptor 00:18:52.343 16:04:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:52.343 16:04:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:52.343 [2024-07-15 16:04:45.844417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:53.277 16:04:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:53.277 16:04:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:53.277 16:04:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.277 16:04:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:53.277 16:04:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:53.277 16:04:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:53.277 16:04:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:53.277 [2024-07-15 16:04:46.898021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:53.277 [2024-07-15 16:04:46.898267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b920 with addr=10.0.0.2, port=4420 00:18:53.277 [2024-07-15 16:04:46.898646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b920 is same with the state(5) to be set 00:18:53.277 [2024-07-15 16:04:46.898732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b920 (9): Bad file descriptor 00:18:53.277 [2024-07-15 16:04:46.899555] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:53.277 [2024-07-15 16:04:46.899618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:53.277 [2024-07-15 16:04:46.899639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:53.277 [2024-07-15 16:04:46.899658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:53.277 [2024-07-15 16:04:46.899700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
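For reference, the reconnect cadence visible in these messages (1 s reconnect delay, 2 s controller-loss timeout, 1 s fast-io-fail) comes from the options the test passed when it started discovery on the host-side target. A minimal sketch of that setup, reusing the binary, RPC socket, address and NQN from this run; rpc_cmd in the script simply forwards these arguments to scripts/rpc.py:

  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  # wait for /tmp/host.sock to appear, then:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1   # flag as passed by the test above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach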
00:18:53.277 [2024-07-15 16:04:46.899721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:53.277 16:04:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.277 16:04:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:53.277 16:04:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:54.211 [2024-07-15 16:04:47.899779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:54.211 [2024-07-15 16:04:47.899838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:54.211 [2024-07-15 16:04:47.899868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:54.211 [2024-07-15 16:04:47.899878] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:18:54.211 [2024-07-15 16:04:47.899903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:54.211 [2024-07-15 16:04:47.899933] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:54.211 [2024-07-15 16:04:47.900040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:54.211 [2024-07-15 16:04:47.900058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.211 [2024-07-15 16:04:47.900072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:54.211 [2024-07-15 16:04:47.900082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.211 [2024-07-15 16:04:47.900093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:54.211 [2024-07-15 16:04:47.900102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.211 [2024-07-15 16:04:47.900113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:54.211 [2024-07-15 16:04:47.900122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.211 [2024-07-15 16:04:47.900132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:54.211 [2024-07-15 16:04:47.900149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.211 [2024-07-15 16:04:47.900159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
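While those reconnect attempts fail, the test itself is only polling: its wait_for_bdev helper re-reads the host's bdev list once per second until nvme0n1 drops out. A sketch of that loop against the same /tmp/host.sock socket (the get_bdev_list helper in the trace is the bdev_get_bdevs | jq | sort | xargs pipeline):

  expected=""   # wait_for_bdev '' — done once nvme0n1 is gone
  while true; do
      bdevs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
      [[ "$bdevs" == "$expected" ]] && break
      sleep 1
  done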
00:18:54.211 [2024-07-15 16:04:47.900198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcce3c0 (9): Bad file descriptor 00:18:54.211 [2024-07-15 16:04:47.901204] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:54.211 [2024-07-15 16:04:47.901225] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:54.211 16:04:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:54.211 16:04:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:54.211 16:04:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:54.211 16:04:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.211 16:04:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:54.211 16:04:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:54.211 16:04:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:54.470 16:04:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.470 16:04:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:54.470 16:04:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:54.470 16:04:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:54.470 16:04:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:54.470 16:04:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:54.470 16:04:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:54.470 16:04:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.470 16:04:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:54.470 16:04:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:54.470 16:04:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:54.470 16:04:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:54.470 16:04:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.470 16:04:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:54.470 16:04:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:55.403 16:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:55.403 16:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:55.403 16:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:55.403 16:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:55.404 16:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.404 16:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.404 16:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:55.404 16:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.404 16:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:55.404 16:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:56.339 [2024-07-15 16:04:49.904292] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:56.339 [2024-07-15 16:04:49.904332] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:56.339 [2024-07-15 16:04:49.904352] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:56.339 [2024-07-15 16:04:49.990488] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:56.339 [2024-07-15 16:04:50.046756] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:56.339 [2024-07-15 16:04:50.046813] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:56.339 [2024-07-15 16:04:50.046837] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:56.339 [2024-07-15 16:04:50.046854] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:56.339 [2024-07-15 16:04:50.046863] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:56.339 [2024-07-15 16:04:50.052918] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd405c0 was disconnected and freed. delete nvme_qpair. 
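That re-attach closes the loop this test exists to exercise: drop the target's address, wait for the host to give up the bdev, bring the interface back, and wait for the discovery service to attach a fresh controller (nvme1 this time). The interface bounce itself is just the ip commands already visible in this trace, run against the nvmf_tgt_ns_spdk namespace that nvmf_veth_init created:

  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # ... poll bdev_get_bdevs until the list is empty ...
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # ... poll bdev_get_bdevs until nvme1n1 appears ...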
00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91109 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 91109 ']' 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 91109 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91109 00:18:56.598 killing process with pid 91109 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:56.598 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91109' 00:18:56.599 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 91109 00:18:56.599 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 91109 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:56.857 rmmod nvme_tcp 00:18:56.857 rmmod nvme_fabrics 00:18:56.857 rmmod nvme_keyring 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:18:56.857 16:04:50 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 91060 ']' 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 91060 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 91060 ']' 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 91060 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91060 00:18:56.857 killing process with pid 91060 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91060' 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 91060 00:18:56.857 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 91060 00:18:57.116 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:57.116 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:57.116 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:57.116 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.116 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:57.116 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.116 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.116 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.116 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:57.116 00:18:57.116 real 0m14.280s 00:18:57.116 user 0m25.717s 00:18:57.116 sys 0m1.615s 00:18:57.116 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:57.116 16:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:57.116 ************************************ 00:18:57.116 END TEST nvmf_discovery_remove_ifc 00:18:57.116 ************************************ 00:18:57.116 16:04:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:57.116 16:04:50 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:57.116 16:04:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:57.116 16:04:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:57.116 16:04:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:57.375 ************************************ 00:18:57.375 START TEST nvmf_identify_kernel_target 00:18:57.375 ************************************ 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:57.375 * Looking for test storage... 00:18:57.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.375 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:57.376 Cannot find device "nvmf_tgt_br" 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:57.376 Cannot find device "nvmf_tgt_br2" 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:18:57.376 16:04:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:57.376 Cannot find device "nvmf_tgt_br" 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:57.376 Cannot find device "nvmf_tgt_br2" 00:18:57.376 16:04:51 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:57.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:57.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:57.376 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:57.634 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:57.634 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:57.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:18:57.635 00:18:57.635 --- 10.0.0.2 ping statistics --- 00:18:57.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.635 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:57.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:57.635 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:18:57.635 00:18:57.635 --- 10.0.0.3 ping statistics --- 00:18:57.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.635 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:57.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:57.635 00:18:57.635 --- 10.0.0.1 ping statistics --- 00:18:57.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.635 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:57.635 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:58.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:58.202 Waiting for block devices as requested 00:18:58.202 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:58.202 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:58.202 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:58.202 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:58.202 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:58.202 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:58.202 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:58.202 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:58.202 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:58.202 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:58.202 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:58.461 No valid GPT data, bailing 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:58.461 16:04:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:58.461 No valid GPT data, bailing 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:58.461 No valid GPT data, bailing 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:58.461 No valid GPT data, bailing 00:18:58.461 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
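The identify_kernel_nvmf test running here exports one of the freed drives (in this run /dev/nvme1n1) through the Linux kernel nvmet target rather than through SPDK, then points spdk_nvme_identify at it. The lines that follow drive that setup through configfs; since set -x does not echo redirections, the sketch below fills in the standard nvmet attribute files as an assumption rather than reading them from this log, and condenses the helper's steps:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet                  # as above; nvmet_tcp may also need loading on some kernels
  mkdir "$subsys" && mkdir "$subsys/namespaces/1" && mkdir "$port"
  echo 1            > "$subsys/attr_allow_any_host"         # assumed destination file
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420

With the port linked, the discover command should report the two records shown below: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.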
00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:58.720 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -a 10.0.0.1 -t tcp -s 4420 00:18:58.720 00:18:58.720 Discovery Log Number of Records 2, Generation counter 2 00:18:58.720 =====Discovery Log Entry 0====== 00:18:58.720 trtype: tcp 00:18:58.720 adrfam: ipv4 00:18:58.720 subtype: current discovery subsystem 00:18:58.720 treq: not specified, sq flow control disable supported 00:18:58.720 portid: 1 00:18:58.720 trsvcid: 4420 00:18:58.720 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:58.720 traddr: 10.0.0.1 00:18:58.720 eflags: none 00:18:58.721 sectype: none 00:18:58.721 =====Discovery Log Entry 1====== 00:18:58.721 trtype: tcp 00:18:58.721 adrfam: ipv4 00:18:58.721 subtype: nvme subsystem 00:18:58.721 treq: not specified, sq flow control disable supported 00:18:58.721 portid: 1 00:18:58.721 trsvcid: 4420 00:18:58.721 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:58.721 traddr: 10.0.0.1 00:18:58.721 eflags: none 00:18:58.721 sectype: none 00:18:58.721 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:58.721 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:58.721 ===================================================== 00:18:58.721 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:58.721 ===================================================== 00:18:58.721 Controller Capabilities/Features 00:18:58.721 ================================ 00:18:58.721 Vendor ID: 0000 00:18:58.721 Subsystem Vendor ID: 0000 00:18:58.721 Serial Number: 31e3aa2493bd9d3d4914 00:18:58.721 Model Number: Linux 00:18:58.721 Firmware Version: 6.7.0-68 00:18:58.721 Recommended Arb Burst: 0 00:18:58.721 IEEE OUI Identifier: 00 00 00 00:18:58.721 Multi-path I/O 00:18:58.721 May have multiple subsystem ports: No 00:18:58.721 May have multiple controllers: No 00:18:58.721 Associated with SR-IOV VF: No 00:18:58.721 Max Data Transfer Size: Unlimited 00:18:58.721 Max Number of Namespaces: 0 
00:18:58.721 Max Number of I/O Queues: 1024 00:18:58.721 NVMe Specification Version (VS): 1.3 00:18:58.721 NVMe Specification Version (Identify): 1.3 00:18:58.721 Maximum Queue Entries: 1024 00:18:58.721 Contiguous Queues Required: No 00:18:58.721 Arbitration Mechanisms Supported 00:18:58.721 Weighted Round Robin: Not Supported 00:18:58.721 Vendor Specific: Not Supported 00:18:58.721 Reset Timeout: 7500 ms 00:18:58.721 Doorbell Stride: 4 bytes 00:18:58.721 NVM Subsystem Reset: Not Supported 00:18:58.721 Command Sets Supported 00:18:58.721 NVM Command Set: Supported 00:18:58.721 Boot Partition: Not Supported 00:18:58.721 Memory Page Size Minimum: 4096 bytes 00:18:58.721 Memory Page Size Maximum: 4096 bytes 00:18:58.721 Persistent Memory Region: Not Supported 00:18:58.721 Optional Asynchronous Events Supported 00:18:58.721 Namespace Attribute Notices: Not Supported 00:18:58.721 Firmware Activation Notices: Not Supported 00:18:58.721 ANA Change Notices: Not Supported 00:18:58.721 PLE Aggregate Log Change Notices: Not Supported 00:18:58.721 LBA Status Info Alert Notices: Not Supported 00:18:58.721 EGE Aggregate Log Change Notices: Not Supported 00:18:58.721 Normal NVM Subsystem Shutdown event: Not Supported 00:18:58.721 Zone Descriptor Change Notices: Not Supported 00:18:58.721 Discovery Log Change Notices: Supported 00:18:58.721 Controller Attributes 00:18:58.721 128-bit Host Identifier: Not Supported 00:18:58.721 Non-Operational Permissive Mode: Not Supported 00:18:58.721 NVM Sets: Not Supported 00:18:58.721 Read Recovery Levels: Not Supported 00:18:58.721 Endurance Groups: Not Supported 00:18:58.721 Predictable Latency Mode: Not Supported 00:18:58.721 Traffic Based Keep ALive: Not Supported 00:18:58.721 Namespace Granularity: Not Supported 00:18:58.721 SQ Associations: Not Supported 00:18:58.721 UUID List: Not Supported 00:18:58.721 Multi-Domain Subsystem: Not Supported 00:18:58.721 Fixed Capacity Management: Not Supported 00:18:58.721 Variable Capacity Management: Not Supported 00:18:58.721 Delete Endurance Group: Not Supported 00:18:58.721 Delete NVM Set: Not Supported 00:18:58.721 Extended LBA Formats Supported: Not Supported 00:18:58.721 Flexible Data Placement Supported: Not Supported 00:18:58.721 00:18:58.721 Controller Memory Buffer Support 00:18:58.721 ================================ 00:18:58.721 Supported: No 00:18:58.721 00:18:58.721 Persistent Memory Region Support 00:18:58.721 ================================ 00:18:58.721 Supported: No 00:18:58.721 00:18:58.721 Admin Command Set Attributes 00:18:58.721 ============================ 00:18:58.721 Security Send/Receive: Not Supported 00:18:58.721 Format NVM: Not Supported 00:18:58.721 Firmware Activate/Download: Not Supported 00:18:58.721 Namespace Management: Not Supported 00:18:58.721 Device Self-Test: Not Supported 00:18:58.721 Directives: Not Supported 00:18:58.721 NVMe-MI: Not Supported 00:18:58.721 Virtualization Management: Not Supported 00:18:58.721 Doorbell Buffer Config: Not Supported 00:18:58.721 Get LBA Status Capability: Not Supported 00:18:58.721 Command & Feature Lockdown Capability: Not Supported 00:18:58.721 Abort Command Limit: 1 00:18:58.721 Async Event Request Limit: 1 00:18:58.721 Number of Firmware Slots: N/A 00:18:58.721 Firmware Slot 1 Read-Only: N/A 00:18:58.721 Firmware Activation Without Reset: N/A 00:18:58.721 Multiple Update Detection Support: N/A 00:18:58.721 Firmware Update Granularity: No Information Provided 00:18:58.721 Per-Namespace SMART Log: No 00:18:58.721 Asymmetric Namespace Access Log Page: 
Not Supported 00:18:58.721 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:58.721 Command Effects Log Page: Not Supported 00:18:58.721 Get Log Page Extended Data: Supported 00:18:58.721 Telemetry Log Pages: Not Supported 00:18:58.721 Persistent Event Log Pages: Not Supported 00:18:58.721 Supported Log Pages Log Page: May Support 00:18:58.721 Commands Supported & Effects Log Page: Not Supported 00:18:58.721 Feature Identifiers & Effects Log Page:May Support 00:18:58.721 NVMe-MI Commands & Effects Log Page: May Support 00:18:58.721 Data Area 4 for Telemetry Log: Not Supported 00:18:58.721 Error Log Page Entries Supported: 1 00:18:58.721 Keep Alive: Not Supported 00:18:58.721 00:18:58.721 NVM Command Set Attributes 00:18:58.721 ========================== 00:18:58.721 Submission Queue Entry Size 00:18:58.721 Max: 1 00:18:58.721 Min: 1 00:18:58.721 Completion Queue Entry Size 00:18:58.721 Max: 1 00:18:58.721 Min: 1 00:18:58.721 Number of Namespaces: 0 00:18:58.721 Compare Command: Not Supported 00:18:58.721 Write Uncorrectable Command: Not Supported 00:18:58.722 Dataset Management Command: Not Supported 00:18:58.722 Write Zeroes Command: Not Supported 00:18:58.722 Set Features Save Field: Not Supported 00:18:58.722 Reservations: Not Supported 00:18:58.722 Timestamp: Not Supported 00:18:58.722 Copy: Not Supported 00:18:58.722 Volatile Write Cache: Not Present 00:18:58.722 Atomic Write Unit (Normal): 1 00:18:58.722 Atomic Write Unit (PFail): 1 00:18:58.722 Atomic Compare & Write Unit: 1 00:18:58.722 Fused Compare & Write: Not Supported 00:18:58.722 Scatter-Gather List 00:18:58.722 SGL Command Set: Supported 00:18:58.722 SGL Keyed: Not Supported 00:18:58.722 SGL Bit Bucket Descriptor: Not Supported 00:18:58.722 SGL Metadata Pointer: Not Supported 00:18:58.722 Oversized SGL: Not Supported 00:18:58.722 SGL Metadata Address: Not Supported 00:18:58.722 SGL Offset: Supported 00:18:58.722 Transport SGL Data Block: Not Supported 00:18:58.722 Replay Protected Memory Block: Not Supported 00:18:58.722 00:18:58.722 Firmware Slot Information 00:18:58.722 ========================= 00:18:58.722 Active slot: 0 00:18:58.722 00:18:58.722 00:18:58.722 Error Log 00:18:58.722 ========= 00:18:58.722 00:18:58.722 Active Namespaces 00:18:58.722 ================= 00:18:58.722 Discovery Log Page 00:18:58.722 ================== 00:18:58.722 Generation Counter: 2 00:18:58.722 Number of Records: 2 00:18:58.722 Record Format: 0 00:18:58.722 00:18:58.722 Discovery Log Entry 0 00:18:58.722 ---------------------- 00:18:58.722 Transport Type: 3 (TCP) 00:18:58.722 Address Family: 1 (IPv4) 00:18:58.722 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:58.722 Entry Flags: 00:18:58.722 Duplicate Returned Information: 0 00:18:58.722 Explicit Persistent Connection Support for Discovery: 0 00:18:58.722 Transport Requirements: 00:18:58.722 Secure Channel: Not Specified 00:18:58.722 Port ID: 1 (0x0001) 00:18:58.722 Controller ID: 65535 (0xffff) 00:18:58.722 Admin Max SQ Size: 32 00:18:58.722 Transport Service Identifier: 4420 00:18:58.722 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:58.722 Transport Address: 10.0.0.1 00:18:58.722 Discovery Log Entry 1 00:18:58.722 ---------------------- 00:18:58.722 Transport Type: 3 (TCP) 00:18:58.722 Address Family: 1 (IPv4) 00:18:58.722 Subsystem Type: 2 (NVM Subsystem) 00:18:58.722 Entry Flags: 00:18:58.722 Duplicate Returned Information: 0 00:18:58.722 Explicit Persistent Connection Support for Discovery: 0 00:18:58.722 Transport Requirements: 00:18:58.722 
Secure Channel: Not Specified 00:18:58.722 Port ID: 1 (0x0001) 00:18:58.722 Controller ID: 65535 (0xffff) 00:18:58.722 Admin Max SQ Size: 32 00:18:58.722 Transport Service Identifier: 4420 00:18:58.722 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:58.722 Transport Address: 10.0.0.1 00:18:58.722 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:58.981 get_feature(0x01) failed 00:18:58.981 get_feature(0x02) failed 00:18:58.981 get_feature(0x04) failed 00:18:58.981 ===================================================== 00:18:58.981 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:58.981 ===================================================== 00:18:58.981 Controller Capabilities/Features 00:18:58.981 ================================ 00:18:58.981 Vendor ID: 0000 00:18:58.981 Subsystem Vendor ID: 0000 00:18:58.981 Serial Number: 034e37c7f5eda0ca533c 00:18:58.981 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:58.981 Firmware Version: 6.7.0-68 00:18:58.981 Recommended Arb Burst: 6 00:18:58.981 IEEE OUI Identifier: 00 00 00 00:18:58.981 Multi-path I/O 00:18:58.981 May have multiple subsystem ports: Yes 00:18:58.981 May have multiple controllers: Yes 00:18:58.981 Associated with SR-IOV VF: No 00:18:58.981 Max Data Transfer Size: Unlimited 00:18:58.981 Max Number of Namespaces: 1024 00:18:58.981 Max Number of I/O Queues: 128 00:18:58.981 NVMe Specification Version (VS): 1.3 00:18:58.981 NVMe Specification Version (Identify): 1.3 00:18:58.981 Maximum Queue Entries: 1024 00:18:58.981 Contiguous Queues Required: No 00:18:58.981 Arbitration Mechanisms Supported 00:18:58.981 Weighted Round Robin: Not Supported 00:18:58.981 Vendor Specific: Not Supported 00:18:58.981 Reset Timeout: 7500 ms 00:18:58.981 Doorbell Stride: 4 bytes 00:18:58.981 NVM Subsystem Reset: Not Supported 00:18:58.981 Command Sets Supported 00:18:58.981 NVM Command Set: Supported 00:18:58.981 Boot Partition: Not Supported 00:18:58.981 Memory Page Size Minimum: 4096 bytes 00:18:58.981 Memory Page Size Maximum: 4096 bytes 00:18:58.981 Persistent Memory Region: Not Supported 00:18:58.981 Optional Asynchronous Events Supported 00:18:58.981 Namespace Attribute Notices: Supported 00:18:58.981 Firmware Activation Notices: Not Supported 00:18:58.981 ANA Change Notices: Supported 00:18:58.981 PLE Aggregate Log Change Notices: Not Supported 00:18:58.981 LBA Status Info Alert Notices: Not Supported 00:18:58.981 EGE Aggregate Log Change Notices: Not Supported 00:18:58.981 Normal NVM Subsystem Shutdown event: Not Supported 00:18:58.981 Zone Descriptor Change Notices: Not Supported 00:18:58.981 Discovery Log Change Notices: Not Supported 00:18:58.981 Controller Attributes 00:18:58.981 128-bit Host Identifier: Supported 00:18:58.981 Non-Operational Permissive Mode: Not Supported 00:18:58.981 NVM Sets: Not Supported 00:18:58.981 Read Recovery Levels: Not Supported 00:18:58.982 Endurance Groups: Not Supported 00:18:58.982 Predictable Latency Mode: Not Supported 00:18:58.982 Traffic Based Keep ALive: Supported 00:18:58.982 Namespace Granularity: Not Supported 00:18:58.982 SQ Associations: Not Supported 00:18:58.982 UUID List: Not Supported 00:18:58.982 Multi-Domain Subsystem: Not Supported 00:18:58.982 Fixed Capacity Management: Not Supported 00:18:58.982 Variable Capacity Management: Not Supported 00:18:58.982 
Delete Endurance Group: Not Supported 00:18:58.982 Delete NVM Set: Not Supported 00:18:58.982 Extended LBA Formats Supported: Not Supported 00:18:58.982 Flexible Data Placement Supported: Not Supported 00:18:58.982 00:18:58.982 Controller Memory Buffer Support 00:18:58.982 ================================ 00:18:58.982 Supported: No 00:18:58.982 00:18:58.982 Persistent Memory Region Support 00:18:58.982 ================================ 00:18:58.982 Supported: No 00:18:58.982 00:18:58.982 Admin Command Set Attributes 00:18:58.982 ============================ 00:18:58.982 Security Send/Receive: Not Supported 00:18:58.982 Format NVM: Not Supported 00:18:58.982 Firmware Activate/Download: Not Supported 00:18:58.982 Namespace Management: Not Supported 00:18:58.982 Device Self-Test: Not Supported 00:18:58.982 Directives: Not Supported 00:18:58.982 NVMe-MI: Not Supported 00:18:58.982 Virtualization Management: Not Supported 00:18:58.982 Doorbell Buffer Config: Not Supported 00:18:58.982 Get LBA Status Capability: Not Supported 00:18:58.982 Command & Feature Lockdown Capability: Not Supported 00:18:58.982 Abort Command Limit: 4 00:18:58.982 Async Event Request Limit: 4 00:18:58.982 Number of Firmware Slots: N/A 00:18:58.982 Firmware Slot 1 Read-Only: N/A 00:18:58.982 Firmware Activation Without Reset: N/A 00:18:58.982 Multiple Update Detection Support: N/A 00:18:58.982 Firmware Update Granularity: No Information Provided 00:18:58.982 Per-Namespace SMART Log: Yes 00:18:58.982 Asymmetric Namespace Access Log Page: Supported 00:18:58.982 ANA Transition Time : 10 sec 00:18:58.982 00:18:58.982 Asymmetric Namespace Access Capabilities 00:18:58.982 ANA Optimized State : Supported 00:18:58.982 ANA Non-Optimized State : Supported 00:18:58.982 ANA Inaccessible State : Supported 00:18:58.982 ANA Persistent Loss State : Supported 00:18:58.982 ANA Change State : Supported 00:18:58.982 ANAGRPID is not changed : No 00:18:58.982 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:58.982 00:18:58.982 ANA Group Identifier Maximum : 128 00:18:58.982 Number of ANA Group Identifiers : 128 00:18:58.982 Max Number of Allowed Namespaces : 1024 00:18:58.982 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:58.982 Command Effects Log Page: Supported 00:18:58.982 Get Log Page Extended Data: Supported 00:18:58.982 Telemetry Log Pages: Not Supported 00:18:58.982 Persistent Event Log Pages: Not Supported 00:18:58.982 Supported Log Pages Log Page: May Support 00:18:58.982 Commands Supported & Effects Log Page: Not Supported 00:18:58.982 Feature Identifiers & Effects Log Page:May Support 00:18:58.982 NVMe-MI Commands & Effects Log Page: May Support 00:18:58.982 Data Area 4 for Telemetry Log: Not Supported 00:18:58.982 Error Log Page Entries Supported: 128 00:18:58.982 Keep Alive: Supported 00:18:58.982 Keep Alive Granularity: 1000 ms 00:18:58.982 00:18:58.982 NVM Command Set Attributes 00:18:58.982 ========================== 00:18:58.982 Submission Queue Entry Size 00:18:58.982 Max: 64 00:18:58.982 Min: 64 00:18:58.982 Completion Queue Entry Size 00:18:58.982 Max: 16 00:18:58.982 Min: 16 00:18:58.982 Number of Namespaces: 1024 00:18:58.982 Compare Command: Not Supported 00:18:58.982 Write Uncorrectable Command: Not Supported 00:18:58.982 Dataset Management Command: Supported 00:18:58.982 Write Zeroes Command: Supported 00:18:58.982 Set Features Save Field: Not Supported 00:18:58.982 Reservations: Not Supported 00:18:58.982 Timestamp: Not Supported 00:18:58.982 Copy: Not Supported 00:18:58.982 Volatile Write Cache: Present 
00:18:58.982 Atomic Write Unit (Normal): 1 00:18:58.982 Atomic Write Unit (PFail): 1 00:18:58.982 Atomic Compare & Write Unit: 1 00:18:58.982 Fused Compare & Write: Not Supported 00:18:58.982 Scatter-Gather List 00:18:58.982 SGL Command Set: Supported 00:18:58.982 SGL Keyed: Not Supported 00:18:58.982 SGL Bit Bucket Descriptor: Not Supported 00:18:58.982 SGL Metadata Pointer: Not Supported 00:18:58.982 Oversized SGL: Not Supported 00:18:58.982 SGL Metadata Address: Not Supported 00:18:58.982 SGL Offset: Supported 00:18:58.982 Transport SGL Data Block: Not Supported 00:18:58.982 Replay Protected Memory Block: Not Supported 00:18:58.982 00:18:58.982 Firmware Slot Information 00:18:58.982 ========================= 00:18:58.982 Active slot: 0 00:18:58.982 00:18:58.982 Asymmetric Namespace Access 00:18:58.982 =========================== 00:18:58.982 Change Count : 0 00:18:58.982 Number of ANA Group Descriptors : 1 00:18:58.982 ANA Group Descriptor : 0 00:18:58.982 ANA Group ID : 1 00:18:58.982 Number of NSID Values : 1 00:18:58.982 Change Count : 0 00:18:58.982 ANA State : 1 00:18:58.982 Namespace Identifier : 1 00:18:58.982 00:18:58.982 Commands Supported and Effects 00:18:58.982 ============================== 00:18:58.982 Admin Commands 00:18:58.982 -------------- 00:18:58.982 Get Log Page (02h): Supported 00:18:58.982 Identify (06h): Supported 00:18:58.982 Abort (08h): Supported 00:18:58.982 Set Features (09h): Supported 00:18:58.982 Get Features (0Ah): Supported 00:18:58.982 Asynchronous Event Request (0Ch): Supported 00:18:58.982 Keep Alive (18h): Supported 00:18:58.982 I/O Commands 00:18:58.982 ------------ 00:18:58.982 Flush (00h): Supported 00:18:58.982 Write (01h): Supported LBA-Change 00:18:58.982 Read (02h): Supported 00:18:58.982 Write Zeroes (08h): Supported LBA-Change 00:18:58.982 Dataset Management (09h): Supported 00:18:58.982 00:18:58.982 Error Log 00:18:58.982 ========= 00:18:58.982 Entry: 0 00:18:58.982 Error Count: 0x3 00:18:58.982 Submission Queue Id: 0x0 00:18:58.982 Command Id: 0x5 00:18:58.982 Phase Bit: 0 00:18:58.982 Status Code: 0x2 00:18:58.983 Status Code Type: 0x0 00:18:58.983 Do Not Retry: 1 00:18:58.983 Error Location: 0x28 00:18:58.983 LBA: 0x0 00:18:58.983 Namespace: 0x0 00:18:58.983 Vendor Log Page: 0x0 00:18:58.983 ----------- 00:18:58.983 Entry: 1 00:18:58.983 Error Count: 0x2 00:18:58.983 Submission Queue Id: 0x0 00:18:58.983 Command Id: 0x5 00:18:58.983 Phase Bit: 0 00:18:58.983 Status Code: 0x2 00:18:58.983 Status Code Type: 0x0 00:18:58.983 Do Not Retry: 1 00:18:58.983 Error Location: 0x28 00:18:58.983 LBA: 0x0 00:18:58.983 Namespace: 0x0 00:18:58.983 Vendor Log Page: 0x0 00:18:58.983 ----------- 00:18:58.983 Entry: 2 00:18:58.983 Error Count: 0x1 00:18:58.983 Submission Queue Id: 0x0 00:18:58.983 Command Id: 0x4 00:18:58.983 Phase Bit: 0 00:18:58.983 Status Code: 0x2 00:18:58.983 Status Code Type: 0x0 00:18:58.983 Do Not Retry: 1 00:18:58.983 Error Location: 0x28 00:18:58.983 LBA: 0x0 00:18:58.983 Namespace: 0x0 00:18:58.983 Vendor Log Page: 0x0 00:18:58.983 00:18:58.983 Number of Queues 00:18:58.983 ================ 00:18:58.983 Number of I/O Submission Queues: 128 00:18:58.983 Number of I/O Completion Queues: 128 00:18:58.983 00:18:58.983 ZNS Specific Controller Data 00:18:58.983 ============================ 00:18:58.983 Zone Append Size Limit: 0 00:18:58.983 00:18:58.983 00:18:58.983 Active Namespaces 00:18:58.983 ================= 00:18:58.983 get_feature(0x05) failed 00:18:58.983 Namespace ID:1 00:18:58.983 Command Set Identifier: NVM (00h) 
00:18:58.983 Deallocate: Supported 00:18:58.983 Deallocated/Unwritten Error: Not Supported 00:18:58.983 Deallocated Read Value: Unknown 00:18:58.983 Deallocate in Write Zeroes: Not Supported 00:18:58.983 Deallocated Guard Field: 0xFFFF 00:18:58.983 Flush: Supported 00:18:58.983 Reservation: Not Supported 00:18:58.983 Namespace Sharing Capabilities: Multiple Controllers 00:18:58.983 Size (in LBAs): 1310720 (5GiB) 00:18:58.983 Capacity (in LBAs): 1310720 (5GiB) 00:18:58.983 Utilization (in LBAs): 1310720 (5GiB) 00:18:58.983 UUID: b1db84ff-83dc-4bcc-ac81-136a62e48f25 00:18:58.983 Thin Provisioning: Not Supported 00:18:58.983 Per-NS Atomic Units: Yes 00:18:58.983 Atomic Boundary Size (Normal): 0 00:18:58.983 Atomic Boundary Size (PFail): 0 00:18:58.983 Atomic Boundary Offset: 0 00:18:58.983 NGUID/EUI64 Never Reused: No 00:18:58.983 ANA group ID: 1 00:18:58.983 Namespace Write Protected: No 00:18:58.983 Number of LBA Formats: 1 00:18:58.983 Current LBA Format: LBA Format #00 00:18:58.983 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:58.983 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:58.983 rmmod nvme_tcp 00:18:58.983 rmmod nvme_fabrics 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.983 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.241 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:59.241 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:59.241 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:59.241 
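At this point the identify runs are done and nvmftestfini tears the host side down: nvme-tcp and nvme-fabrics are unloaded and the initiator address is flushed, after which clean_kernel_target dismantles the configfs tree on the lines that follow, in reverse order of creation. A compact sketch of that teardown, using the same inferred attribute paths as in the setup sketch above (the target of the 'echo 0' is assumed to be the namespace enable file):

# Sketch of clean_kernel_target as traced below: undo the configfs export in
# reverse order, then drop the nvmet modules once nothing holds them.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

echo 0 > "$subsys/namespaces/1/enable"                 # inferred redirection target
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1"
rmdir "$port"
rmdir "$subsys"

modprobe -r nvmet_tcp nvmet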
16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:18:59.241 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:59.241 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:59.241 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:59.241 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:59.241 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:59.241 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:59.241 16:04:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:59.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:00.066 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:00.066 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:00.066 00:19:00.066 real 0m2.859s 00:19:00.066 user 0m0.988s 00:19:00.066 sys 0m1.349s 00:19:00.066 16:04:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:00.066 16:04:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.066 ************************************ 00:19:00.066 END TEST nvmf_identify_kernel_target 00:19:00.066 ************************************ 00:19:00.066 16:04:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:00.066 16:04:53 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:00.066 16:04:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:00.066 16:04:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:00.066 16:04:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:00.066 ************************************ 00:19:00.066 START TEST nvmf_auth_host 00:19:00.066 ************************************ 00:19:00.066 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:00.325 * Looking for test storage... 
00:19:00.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:00.325 Cannot find device "nvmf_tgt_br" 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:00.325 Cannot find device "nvmf_tgt_br2" 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:00.325 Cannot find device "nvmf_tgt_br" 
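The "Cannot find device" messages above are harmless: nvmftestinit first tears down whatever interfaces a previous run may have left behind, and the commands that follow rebuild the virtual test network from scratch. A condensed sketch of the topology being assembled, using the interface and namespace names from the trace: the host keeps nvmf_init_if at 10.0.0.1, the target ends sit inside nvmf_tgt_ns_spdk at 10.0.0.2 and 10.0.0.3, and nvmf_br bridges the peer ends together.

# Sketch of nvmf_veth_init as traced below: three veth pairs, one network
# namespace for the target, one bridge joining the host-side peer ends.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic in and bridged traffic through, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3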
00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:00.325 Cannot find device "nvmf_tgt_br2" 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:19:00.325 16:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:00.325 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:00.325 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:00.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.325 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:00.325 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:00.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.325 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:00.325 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:00.325 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:00.325 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:00.325 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:00.325 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:00.583 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:00.583 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:00.583 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:00.583 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:00.583 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:00.583 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:00.583 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:00.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:19:00.584 00:19:00.584 --- 10.0.0.2 ping statistics --- 00:19:00.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.584 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:00.584 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.584 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:19:00.584 00:19:00.584 --- 10.0.0.3 ping statistics --- 00:19:00.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.584 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:00.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:00.584 00:19:00.584 --- 10.0.0.1 ping statistics --- 00:19:00.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.584 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91997 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91997 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91997 ']' 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.584 16:04:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.584 16:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d90e2e01b8ac9a61fe045a36dc0632bd 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Sr5 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d90e2e01b8ac9a61fe045a36dc0632bd 0 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d90e2e01b8ac9a61fe045a36dc0632bd 0 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d90e2e01b8ac9a61fe045a36dc0632bd 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Sr5 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Sr5 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Sr5 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:01.957 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=897beb9479ff6005385508c560df1aa41516e498ffaca68a5790b3e4a5257800 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3Cy 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 897beb9479ff6005385508c560df1aa41516e498ffaca68a5790b3e4a5257800 3 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 897beb9479ff6005385508c560df1aa41516e498ffaca68a5790b3e4a5257800 3 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=897beb9479ff6005385508c560df1aa41516e498ffaca68a5790b3e4a5257800 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3Cy 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3Cy 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3Cy 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5cd3d26a52ae0cc52c9ecc668b353e7594398407ffc96c1c 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hle 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5cd3d26a52ae0cc52c9ecc668b353e7594398407ffc96c1c 0 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5cd3d26a52ae0cc52c9ecc668b353e7594398407ffc96c1c 0 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5cd3d26a52ae0cc52c9ecc668b353e7594398407ffc96c1c 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hle 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hle 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.hle 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a966812f289e30ed639df1b11966ad94734e09bab3e35283 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sU6 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a966812f289e30ed639df1b11966ad94734e09bab3e35283 2 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a966812f289e30ed639df1b11966ad94734e09bab3e35283 2 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a966812f289e30ed639df1b11966ad94734e09bab3e35283 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sU6 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sU6 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.sU6 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cd0142f2625c365d1cc699e72c37bb15 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uAK 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cd0142f2625c365d1cc699e72c37bb15 
1 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cd0142f2625c365d1cc699e72c37bb15 1 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cd0142f2625c365d1cc699e72c37bb15 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:01.958 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uAK 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uAK 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.uAK 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3ae2aac9aff5d61ea0b092fab392884f 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.C5Y 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3ae2aac9aff5d61ea0b092fab392884f 1 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3ae2aac9aff5d61ea0b092fab392884f 1 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3ae2aac9aff5d61ea0b092fab392884f 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.C5Y 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.C5Y 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.C5Y 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:02.253 16:04:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2af46c4cde44d73fd9e0ebec8712d09276a0f9c5362bb4c8 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.U1H 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2af46c4cde44d73fd9e0ebec8712d09276a0f9c5362bb4c8 2 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2af46c4cde44d73fd9e0ebec8712d09276a0f9c5362bb4c8 2 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:02.253 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2af46c4cde44d73fd9e0ebec8712d09276a0f9c5362bb4c8 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.U1H 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.U1H 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.U1H 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=35881a17490063ceb6484a4befdcd16a 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.6Pp 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 35881a17490063ceb6484a4befdcd16a 0 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 35881a17490063ceb6484a4befdcd16a 0 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=35881a17490063ceb6484a4befdcd16a 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.6Pp 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.6Pp 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6Pp 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b7e2ee8979e28a7cdbe4a1ad0adc09118255946a6bc24298bbdde9fa12819a81 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZFg 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b7e2ee8979e28a7cdbe4a1ad0adc09118255946a6bc24298bbdde9fa12819a81 3 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b7e2ee8979e28a7cdbe4a1ad0adc09118255946a6bc24298bbdde9fa12819a81 3 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b7e2ee8979e28a7cdbe4a1ad0adc09118255946a6bc24298bbdde9fa12819a81 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZFg 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZFg 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ZFg 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91997 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91997 ']' 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:02.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
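The trace above walks SPDK's gen_dhchap_key helper end to end for each secret: a random hex string is pulled from /dev/urandom with xxd, passed through format_dhchap_key / format_key and a short "python -" step, then written to a mktemp file and locked down with chmod 0600. xtrace does not show the body of that python step; the sketch below is an approximate reconstruction based on the DH-HMAC-CHAP secret format visible in the resulting keys (the ASCII hex string plus a CRC-32 trailer, base64-encoded and wrapped as DHHC-1:<digest>:<blob>:). The helper name, the two-digit digest formatting and the little-endian CRC byte order are assumptions, not a verbatim copy of nvmf/common.sh.

gen_dhchap_key_sketch() {                          # hypothetical stand-in for the traced helper
    local digest_id=$1 hexlen=$2                   # e.g. 1 (sha256) and 32, as in "gen_dhchap_key sha256 32"
    local hex file
    hex=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # same xxd invocation as in the trace
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 - "$hex" "$digest_id" > "$file" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                         # the ASCII hex string itself is the secret
crc = struct.pack('<I', zlib.crc32(key))           # 4-byte CRC-32 trailer (byte order assumed)
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$file"                             # same permissions as in the trace
    echo "$file"
}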
00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:02.254 16:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.519 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.519 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:02.519 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:02.519 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Sr5 00:19:02.519 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.519 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3Cy ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3Cy 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.hle 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.sU6 ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sU6 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.uAK 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.C5Y ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C5Y 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
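Each secret generated above is then registered with the SPDK target through the keyring_file_add_key RPC, once under keyN for the host's own key and once under ckeyN for the controller (bidirectional) key; the loop continues below for key3/ckey3 and key4. Outside the harness's rpc_cmd wrapper, the same registrations can be issued directly with scripts/rpc.py against the /var/tmp/spdk.sock socket the test waits on, for example (the file names are the temporary keys from this particular run, so substitute your own):

./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.hle
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sU6
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key2  /tmp/spdk.key-sha256.uAK
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C5Y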
00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.U1H 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6Pp ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6Pp 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ZFg 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
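From here the test switches to nvmet_auth_init / configure_kernel_target: the configfs paths for subsystem nqn.2024-02.io.spdk:cnode0, its namespace 1 and port 1 are defined above, and the trace that follows loads nvmet, picks an unused NVMe block device and populates those directories (xtrace prints the echoed values but hides the redirection targets). A hedged reconstruction of those writes, using the standard kernel nvmet configfs attribute names rather than anything visible in the trace, looks like this:

sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir -p "$sub/namespaces/1" "$port"
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # the device selected by the GPT checks below
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"               # values echoed in the trace at common.sh@671-674
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                      # expose the subsystem on the port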
00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:02.778 16:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:03.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:03.036 Waiting for block devices as requested 00:19:03.036 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:03.294 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:03.900 No valid GPT data, bailing 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:03.900 No valid GPT data, bailing 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:03.900 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:04.158 No valid GPT data, bailing 00:19:04.158 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:04.158 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:04.158 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:04.158 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:04.158 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:04.159 No valid GPT data, bailing 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:19:04.159 16:04:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -a 10.0.0.1 -t tcp -s 4420 00:19:04.159 00:19:04.159 Discovery Log Number of Records 2, Generation counter 2 00:19:04.159 =====Discovery Log Entry 0====== 00:19:04.159 trtype: tcp 00:19:04.159 adrfam: ipv4 00:19:04.159 subtype: current discovery subsystem 00:19:04.159 treq: not specified, sq flow control disable supported 00:19:04.159 portid: 1 00:19:04.159 trsvcid: 4420 00:19:04.159 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:04.159 traddr: 10.0.0.1 00:19:04.159 eflags: none 00:19:04.159 sectype: none 00:19:04.159 =====Discovery Log Entry 1====== 00:19:04.159 trtype: tcp 00:19:04.159 adrfam: ipv4 00:19:04.159 subtype: nvme subsystem 00:19:04.159 treq: not specified, sq flow control disable supported 00:19:04.159 portid: 1 00:19:04.159 trsvcid: 4420 00:19:04.159 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:04.159 traddr: 10.0.0.1 00:19:04.159 eflags: none 00:19:04.159 sectype: none 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:04.159 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.417 16:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.417 nvme0n1 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.417 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.674 nvme0n1 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.674 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.932 nvme0n1 00:19:04.932 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.932 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.932 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.933 16:04:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.933 nvme0n1 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:04.933 16:04:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.933 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.934 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.934 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.934 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.934 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.934 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.934 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.191 nvme0n1 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:05.191 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:05.192 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.449 nvme0n1 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:05.449 16:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.707 nvme0n1 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.707 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.963 nvme0n1 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.963 16:04:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:05.963 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:05.964 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:05.964 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.964 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:05.964 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:05.964 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:05.964 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.964 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:05.964 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.964 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.220 nvme0n1 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.220 16:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.477 nvme0n1 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
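For readers following the trace, every digest/dhgroup/keyid iteration in this test reduces to the same host-side RPC sequence. The snippet below is only a condensed, illustrative sketch of the calls visible in the surrounding log, not an excerpt of host/auth.sh: it assumes rpc_cmd is the test suite's JSON-RPC helper, and that the key0/ckey0 names refer to DH-HMAC-CHAP keys registered earlier in the run, outside this excerpt.

# One iteration of the host-side auth check, condensed from the trace above.
digest=sha256 dhgroup=ffdhe3072 keyid=0

# Restrict the initiator to the digest/DH-group pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach to the target with the key pair for this slot; the controller key
# argument is only passed when a ckey exists for the slot.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# The attach only succeeds if authentication passed, so listing the controller
# and detaching it completes the positive check before the next combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0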
00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.477 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.787 nvme0n1 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:06.787 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:07.366 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:07.366 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:07.366 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:07.366 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:07.366 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
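A detail worth noting in the trace is the line ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}): the optional --dhchap-ctrlr-key argument is built with bash's ${parameter:+word} expansion, so it is emitted only for key slots that actually have a controller key (slot 4 above has none, which is why the corresponding check reads [[ -z '' ]]). The standalone illustration below uses hypothetical array values purely to show the idiom; it is not taken from auth.sh.

# ${var:+word} expands to word when var is set and non-empty, and to nothing
# otherwise -- a common way to build optional CLI arguments.
ckeys=([0]="DHHC-1:03:exampleonly" [4]="")   # hypothetical values for illustration
for keyid in 0 4; do
    ckey_arg=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey_arg[@]} extra argument(s): ${ckey_arg[*]}"
done
# keyid=0 -> 2 extra argument(s): --dhchap-ctrlr-key ckey0
# keyid=4 -> 0 extra argument(s):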
00:19:07.366 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:07.366 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.367 16:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.367 nvme0n1 00:19:07.367 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.367 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.367 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.367 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.367 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.624 nvme0n1 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.624 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.882 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.140 nvme0n1 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.140 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.398 nvme0n1 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.398 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.399 16:05:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.399 16:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.656 nvme0n1 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:08.656 16:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.553 16:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.553 nvme0n1 00:19:10.553 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.553 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.553 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.553 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.553 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.811 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.069 nvme0n1 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.069 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.070 
16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.070 16:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.666 nvme0n1 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.666 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.923 nvme0n1 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.923 16:05:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:11.923 16:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:12.181 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:12.181 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.181 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.438 nvme0n1 00:19:12.439 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.439 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.439 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.439 16:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.439 16:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.439 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.005 nvme0n1 00:19:13.005 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.005 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.005 16:05:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.005 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.005 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.005 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.005 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.005 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.005 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.005 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 
-- # local ip 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.264 16:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.829 nvme0n1 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.829 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.830 16:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.395 nvme0n1 00:19:14.395 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.395 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.395 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.395 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.396 
16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.396 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
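The get_main_ns_ip frames around this point resolve which address the host should dial: an associative array maps each transport to the name of the environment variable that holds the target address, and for tcp that is NVMF_INITIATOR_IP (10.0.0.1 in this run). A minimal sketch of that selection follows; the helper's exact body is not shown in this trace, so the use of bash indirect expansion to turn the variable name into the address is an assumption.

    # Sketch only: mirrors the ip_candidates logic visible at nvmf/common.sh@741-755.
    # NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP and TEST_TRANSPORT are assumed to be
    # exported by the test environment, as they are in the surrounding run.
    get_main_ns_ip_sketch() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1    # no transport configured
        ip=${ip_candidates[$TEST_TRANSPORT]}    # a variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1             # indirect expansion; 10.0.0.1 in this run
        echo "${!ip}"
    }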
00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.654 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.222 nvme0n1 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:15.222 
16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:15.222 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.223 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:15.223 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:15.223 16:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:15.223 16:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:15.223 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.223 16:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.789 nvme0n1 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.789 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.048 nvme0n1 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
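Each connect_authenticate pass, like the sha384/ffdhe2048 keyid=1 pass that starts in the next frames, boils down to four RPCs against the running target. Below is a condensed sketch of that host-side sequence, not the verbatim auth.sh body: rpc_cmd is the SPDK test wrapper used throughout this trace, and key1/ckey1 are the key names set up earlier in the run.

    # Condensed sketch of one connect_authenticate pass.
    digest=sha384 dhgroup=ffdhe2048 keyid=1

    # Restrict the host to a single digest and DH group for this pass.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with bidirectional DH-HMAC-CHAP keys for this keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Verify the controller came up, then detach before the next pass.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

Detaching at the end keeps every digest/dhgroup/keyid combination independent of the one before it, which is why the trace repeats the same attach/verify/detach pattern so many times.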
00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.048 nvme0n1 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.048 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.307 nvme0n1 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:16.307 16:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.307 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.566 nvme0n1 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.566 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.824 nvme0n1 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.824 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
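The block above is one connect/authenticate pass for sha384 / ffdhe3072 / key slot 0: nvmet_auth_set_key (host/auth.sh@42-51) programs the target side -- the echo 'hmac(sha384)', echo ffdhe3072 and echo DHHC-1:... lines, whose redirection targets xtrace does not print -- and connect_authenticate (host/auth.sh@55-65) then drives the host. Condensed, the host-side RPC sequence implied by this trace looks roughly as follows; this is a sketch reconstructed from the xtrace, not the literal host/auth.sh code, rpc_cmd is the suite's JSON-RPC wrapper seen throughout the log, and key0/ckey0 are assumed to be keyring names registered earlier in the run.

  # Sketch only: one host-side iteration as implied by the xtrace above.
  connect_authenticate_sketch() {
      local digest=$1 dhgroup=$2 keyid=$3

      # Restrict DH-HMAC-CHAP negotiation to the digest/dhgroup pair under test (host/auth.sh@60).
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Attach using key<N>; the controller key is passed only when one is defined for this slot
      # (host/auth.sh@58 builds the option conditionally -- slot 4 has no ckey). ckeys[] is assumed
      # to be the array populated earlier in auth.sh, outside this excerpt.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

      # Confirm the authenticated controller came up, then detach before the next iteration
      # (host/auth.sh@64-65).
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }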
00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.825 nvme0n1 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.825 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.082 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.082 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.082 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
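Zooming out, the loop markers at host/auth.sh@101-104 give the shape of this whole stretch of output: for the current digest (only sha384 appears in this excerpt) every DH group is swept against every key slot, with nvmet_auth_set_key programming the target and connect_authenticate exercising the host each time. A minimal sketch of that driver loop, assuming the two functions as defined by the test's host/auth.sh and only the groups and slots actually visible in the trace:

  # Hypothetical reconstruction of the sweep; the real dhgroups/keys arrays are set up earlier in auth.sh.
  digest=sha384
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do    # groups seen in this excerpt
      for keyid in 0 1 2 3 4; do                                # five key slots; slot 4 has no controller key
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target side (host/auth.sh@103)
          connect_authenticate "$digest" "$dhgroup" "$keyid"    # host side (host/auth.sh@104)
      done
  done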
00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.083 nvme0n1 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.083 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.342 nvme0n1 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.342 16:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.342 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.600 nvme0n1 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.600 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:17.601 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:17.601 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:17.601 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:17.601 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.601 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.601 nvme0n1 00:19:17.601 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.601 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.601 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.601 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.601 16:05:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.860 nvme0n1 00:19:17.860 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:18.118 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.119 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.119 nvme0n1 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.377 16:05:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.377 16:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.634 nvme0n1 00:19:18.634 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.634 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.634 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.634 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.634 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.634 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.634 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.634 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.634 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:18.635 16:05:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.635 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.892 nvme0n1 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:18.892 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:18.893 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:18.893 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:18.893 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.150 nvme0n1 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.150 16:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.407 nvme0n1 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.407 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.970 nvme0n1 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.970 16:05:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.970 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:19.971 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:19.971 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:19.971 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.971 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.971 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.227 nvme0n1 00:19:20.227 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.227 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.227 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.227 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.227 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.227 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.228 16:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.793 nvme0n1 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
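The trace above is one full pass of the sha384/ffdhe6144 key sweep: for each keyid the secret is installed on the target with nvmet_auth_set_key, the host is restricted to a single digest/DH-group pair with bdev_nvme_set_options, and an authenticated controller is attached, verified, and detached. The sketch below condenses one such round trip into the RPC calls that the rpc_cmd wrapper is issuing in the log. It is illustrative rather than a copy of host/auth.sh: the scripts/rpc.py path is an assumption, and the key names key0/ckey0 are assumed to have been registered with SPDK's keyring earlier in the run (that setup is outside this excerpt).

# One connect_authenticate round trip, mirroring the RPCs visible in the trace above.
# Assumptions: target listening on 10.0.0.1:4420, DH-HMAC-CHAP secrets already registered
# under the key names key0/ckey0, and rpc.py invoked from the SPDK repo as scripts/rpc.py.
# (Target side) nvmet_auth_set_key sha384 ffdhe6144 0 is a helper defined in host/auth.sh;
# its internals are not reproduced here.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0        # bidirectional auth: host key plus controller key
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # the test expects exactly "nvme0"
scripts/rpc.py bdev_nvme_detach_controller nvme0      # tear down before the next keyid

The secrets themselves are the DHHC-1:xx:<base64>: strings in the trace; the two-digit field after DHHC-1 selects the optional hash transform applied to the secret (00 for none, 01/02/03 for SHA-256/384/512), and the different keyids in this log exercise different variants.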
00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.793 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.051 nvme0n1 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
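From here the same sequence repeats for ffdhe8192 and then for sha512 with each DH group in turn; the for-loops at host/auth.sh@100, @101 and @102 visible in the trace drive the whole sweep, calling nvmet_auth_set_key and connect_authenticate once per (digest, dhgroup, keyid) combination. A rough sketch of that driver follows, with array contents inferred from what this excerpt exercises rather than copied from the script.

# Shape of the sweep behind the repeated blocks in this log (host/auth.sh@100-@104).
# The array values are assumptions: this excerpt shows sha384 and sha512 with ffdhe2048,
# ffdhe3072, ffdhe6144 and ffdhe8192 (ffdhe4096 is inferred), and the keys/ckeys arrays
# are expected to have been filled with DHHC-1 secrets earlier in host/auth.sh.
digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do              # host/auth.sh@100
  for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@101
    for keyid in "${!keys[@]}"; do             # host/auth.sh@102
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target side (@103)
      connect_authenticate "$digest" "$dhgroup" "$keyid"    # host-side round trip, as sketched above (@104)
    done
  done
done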
00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.051 16:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.631 nvme0n1 00:19:21.631 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.631 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.631 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.631 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.631 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.631 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.913 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.476 nvme0n1 00:19:22.476 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.477 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.477 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.477 16:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:22.477 16:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.477 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.043 nvme0n1 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.043 16:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.976 nvme0n1 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:23.977 16:05:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.977 16:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.544 nvme0n1 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:24.544 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.545 nvme0n1 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.545 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.803 16:05:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.803 nvme0n1 00:19:24.803 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.804 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.062 nvme0n1 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.062 16:05:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:25.062 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:25.063 16:05:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.063 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 nvme0n1 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 nvme0n1 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.321 16:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.321 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.321 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.321 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.321 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.579 nvme0n1 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.579 
16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.579 16:05:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.579 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.838 nvme0n1 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
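
[editor's note] Each connect_authenticate <digest> <dhgroup> <keyid> cycle traced above boils down to the sequence below. This is a sketch reconstructed from the xtrace lines at host/auth.sh@55-65, not the script verbatim: rpc_cmd is the test suite's JSON-RPC helper, and the exact quoting and variable plumbing are assumptions; only the RPC names and flags shown in the trace are taken as given.

connect_authenticate() {
    # Reconstructed from the trace at host/auth.sh@55-65; the original helper
    # may be worded differently.
    local digest dhgroup keyid ckey
    digest="$1" dhgroup="$2" keyid="$3"
    # Pass a controller (bidirectional) key only if one exists for this keyid.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Limit the initiator to the digest/DH group pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with DH-HMAC-CHAP, confirm the controller came up, then tear it down.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

In the trace the address argument has already been expanded, which is why the attach lines show a literal 10.0.0.1 rather than the helper call.
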
00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.838 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.096 nvme0n1 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.097 16:05:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
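
[editor's note] The repeated ip_candidates block in the trace is the get_main_ns_ip helper (nvmf/common.sh@741-755) resolving which address the initiator should dial. A minimal reconstruction follows; the TEST_TRANSPORT variable name and the error handling are assumptions, since the trace only shows the already-expanded values (tcp, NVMF_INITIATOR_IP, 10.0.0.1).

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) dial the initiator IP

    # $TEST_TRANSPORT is assumed; the trace only shows it expanded to "tcp".
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable holding the address
    ip=${!ip}                              # indirect expansion -> 10.0.0.1 in this run
    [[ -z $ip ]] && return 1
    echo "$ip"
}
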
00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.097 nvme0n1 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.097 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:26.356 
16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.356 16:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.356 nvme0n1 00:19:26.356 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.356 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.356 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.356 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.356 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.356 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.356 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.356 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.356 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.356 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.615 nvme0n1 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.615 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.873 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.873 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.874 16:05:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.874 nvme0n1 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.874 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
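
[editor's note] On the target side, each nvmet_auth_set_key call (host/auth.sh@42-51) pushes the digest, DH group, and DHHC-1 secrets for the chosen keyid to the kernel nvmet host entry. The echoes are visible in the trace, but their destinations are not (xtrace does not print redirections); the configfs path and attribute names below are therefore assumptions about the usual Linux nvmet layout, not something recorded in this log.

nvmet_auth_set_key() {
    local digest dhgroup keyid key ckey
    digest="$1" dhgroup="$2" keyid="$3"
    key="${keys[keyid]}" ckey="${ckeys[keyid]}"   # keys/ckeys arrays are defined earlier in auth.sh

    # Assumed configfs location of the allowed host's DH-HMAC-CHAP attributes.
    local cfs="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"

    echo "hmac($digest)" > "$cfs/dhchap_hash"     # e.g. hmac(sha512)
    echo "$dhgroup"      > "$cfs/dhchap_dhgroup"  # e.g. ffdhe4096
    echo "$key"          > "$cfs/dhchap_key"      # host secret for this keyid
    # A controller key is only configured when this keyid has one (keyid 4 does not).
    [[ -z $ckey ]] || echo "$ckey" > "$cfs/dhchap_ctrl_key"
}
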
00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.132 nvme0n1 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.132 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:27.390 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local 
ip 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.391 16:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.391 nvme0n1 00:19:27.391 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.391 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.391 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.391 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.391 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.391 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:27.648 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.649 nvme0n1 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.649 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:27.906 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.907 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.164 nvme0n1 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
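The connect_authenticate step that begins here exercises the same digest/dhgroup/key combination from the SPDK host side. Condensed, the traced RPC sequence reduces to the sketch below; key1 and ckey1 are key names set up earlier in the test script (not shown in this excerpt):

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 once authentication succeeds
  rpc_cmd bdev_nvme_detach_controller nvme0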
00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:28.164 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.165 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:28.165 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:28.165 16:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:28.165 16:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.165 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.165 16:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.731 nvme0n1 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.731 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.989 nvme0n1 00:19:28.989 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.989 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.989 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.989 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.989 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.989 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.989 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.990 16:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.556 nvme0n1 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.556 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.814 nvme0n1 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.814 16:05:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkwZTJlMDFiOGFjOWE2MWZlMDQ1YTM2ZGMwNjMyYmQ3+n1/: 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: ]] 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk3YmViOTQ3OWZmNjAwNTM4NTUwOGM1NjBkZjFhYTQxNTE2ZTQ5OGZmYWNhNjhhNTc5MGIzZTRhNTI1NzgwMN3HG5A=: 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.814 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.815 16:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.380 nvme0n1 00:19:30.380 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.380 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.380 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.380 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.380 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.380 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.638 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.203 nvme0n1 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.203 16:05:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:31.203 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2QwMTQyZjI2MjVjMzY1ZDFjYzY5OWU3MmMzN2JiMTVueiOL: 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: ]] 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2FlMmFhYzlhZmY1ZDYxZWEwYjA5MmZhYjM5Mjg4NGZ1VAYN: 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.204 16:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.770 nvme0n1 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmFmNDZjNGNkZTQ0ZDczZmQ5ZTBlYmVjODcxMmQwOTI3NmEwZjljNTM2MmJiNGM4RARHUQ==: 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: ]] 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzU4ODFhMTc0OTAwNjNjZWI2NDg0YTRiZWZkY2QxNmHVBBUe: 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:31.770 16:05:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.770 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.336 nvme0n1 00:19:32.336 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.336 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.336 16:05:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.336 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.337 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.337 16:05:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjdlMmVlODk3OWUyOGE3Y2RiZTRhMWFkMGFkYzA5MTE4MjU1OTQ2YTZiYzI0Mjk4YmJkZGU5ZmExMjgxOWE4MYUHHkw=: 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:32.337 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.920 nvme0n1 00:19:32.920 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.920 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.920 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.920 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.920 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.920 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNkM2QyNmE1MmFlMGNjNTJjOWVjYzY2OGIzNTNlNzU5NDM5ODQwN2ZmYzk2YzFjLphefg==: 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2NjgxMmYyODllMzBlZDYzOWRmMWIxMTk2NmFkOTQ3MzRlMDliYWIzZTM1Mjgz8wMRWQ==: 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.179 
16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.179 2024/07/15 16:05:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:33.179 request: 00:19:33.179 { 00:19:33.179 "method": "bdev_nvme_attach_controller", 00:19:33.179 "params": { 00:19:33.179 "name": "nvme0", 00:19:33.179 "trtype": "tcp", 00:19:33.179 "traddr": "10.0.0.1", 00:19:33.179 "adrfam": "ipv4", 00:19:33.179 "trsvcid": "4420", 00:19:33.179 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:33.179 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:33.179 "prchk_reftag": false, 00:19:33.179 "prchk_guard": false, 00:19:33.179 "hdgst": false, 00:19:33.179 "ddgst": false 00:19:33.179 } 00:19:33.179 } 00:19:33.179 Got JSON-RPC error response 00:19:33.179 GoRPCClient: error on JSON-RPC call 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.179 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.179 2024/07/15 16:05:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:33.179 request: 00:19:33.179 { 00:19:33.179 "method": "bdev_nvme_attach_controller", 00:19:33.179 "params": { 00:19:33.179 "name": 
"nvme0", 00:19:33.179 "trtype": "tcp", 00:19:33.179 "traddr": "10.0.0.1", 00:19:33.179 "adrfam": "ipv4", 00:19:33.179 "trsvcid": "4420", 00:19:33.179 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:33.179 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:33.179 "prchk_reftag": false, 00:19:33.179 "prchk_guard": false, 00:19:33.179 "hdgst": false, 00:19:33.180 "ddgst": false, 00:19:33.180 "dhchap_key": "key2" 00:19:33.180 } 00:19:33.180 } 00:19:33.180 Got JSON-RPC error response 00:19:33.180 GoRPCClient: error on JSON-RPC call 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.180 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:33.438 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:33.438 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:19:33.438 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:33.438 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.438 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.438 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.438 2024/07/15 16:05:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:33.438 request: 00:19:33.439 { 00:19:33.439 "method": "bdev_nvme_attach_controller", 00:19:33.439 "params": { 00:19:33.439 "name": "nvme0", 00:19:33.439 "trtype": "tcp", 00:19:33.439 "traddr": "10.0.0.1", 00:19:33.439 "adrfam": "ipv4", 00:19:33.439 "trsvcid": "4420", 00:19:33.439 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:33.439 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:33.439 "prchk_reftag": false, 00:19:33.439 "prchk_guard": false, 00:19:33.439 "hdgst": false, 00:19:33.439 "ddgst": false, 00:19:33.439 "dhchap_key": "key1", 00:19:33.439 "dhchap_ctrlr_key": "ckey2" 00:19:33.439 } 00:19:33.439 } 00:19:33.439 Got JSON-RPC error response 00:19:33.439 GoRPCClient: error on JSON-RPC call 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:33.439 rmmod nvme_tcp 00:19:33.439 rmmod nvme_fabrics 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91997 ']' 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91997 00:19:33.439 16:05:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91997 ']' 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91997 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.439 16:05:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91997 00:19:33.439 killing process with pid 91997 00:19:33.439 16:05:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:33.439 16:05:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:33.439 16:05:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91997' 00:19:33.439 16:05:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91997 00:19:33.439 16:05:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91997 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:33.697 16:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:34.264 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:34.522 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:34.522 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:34.522 16:05:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Sr5 /tmp/spdk.key-null.hle /tmp/spdk.key-sha256.uAK /tmp/spdk.key-sha384.U1H /tmp/spdk.key-sha512.ZFg /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:34.522 16:05:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:34.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:34.800 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:34.800 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:35.059 ************************************ 00:19:35.059 END TEST nvmf_auth_host 00:19:35.059 ************************************ 00:19:35.059 00:19:35.059 real 0m34.775s 00:19:35.059 user 0m31.679s 00:19:35.059 sys 0m3.725s 00:19:35.059 16:05:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:35.059 16:05:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.059 16:05:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:35.059 16:05:28 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:19:35.059 16:05:28 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:35.059 16:05:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:35.059 16:05:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.059 16:05:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.059 ************************************ 00:19:35.059 START TEST nvmf_digest 00:19:35.059 ************************************ 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:35.059 * Looking for test storage... 
00:19:35.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:35.059 Cannot find device "nvmf_tgt_br" 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.059 Cannot find device "nvmf_tgt_br2" 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:35.059 Cannot find device "nvmf_tgt_br" 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:19:35.059 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:35.060 Cannot find device "nvmf_tgt_br2" 00:19:35.060 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:19:35.060 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:35.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:35.318 16:05:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:35.318 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:35.318 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:35.318 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:35.318 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:35.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:19:35.318 00:19:35.318 --- 10.0.0.2 ping statistics --- 00:19:35.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.318 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:35.318 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:35.318 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:35.319 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:19:35.319 00:19:35.319 --- 10.0.0.3 ping statistics --- 00:19:35.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.319 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:35.319 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:35.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:35.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:35.319 00:19:35.319 --- 10.0.0.1 ping statistics --- 00:19:35.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.319 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:35.319 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.319 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:19:35.319 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:35.319 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.319 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:35.319 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:35.319 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.319 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:35.319 16:05:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:35.577 ************************************ 00:19:35.577 START TEST nvmf_digest_clean 00:19:35.577 ************************************ 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=93589 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 93589 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93589 ']' 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.577 16:05:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.577 16:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:35.577 [2024-07-15 16:05:29.132885] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:19:35.577 [2024-07-15 16:05:29.133002] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.577 [2024-07-15 16:05:29.270669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.835 [2024-07-15 16:05:29.388988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.835 [2024-07-15 16:05:29.389064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.835 [2024-07-15 16:05:29.389089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.835 [2024-07-15 16:05:29.389100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.835 [2024-07-15 16:05:29.389108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
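The target launched just above runs inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init assembled earlier in this trace. A condensed sketch of that topology, using only the interface names and addresses the trace itself shows (the second target interface, nvmf_tgt_if2/10.0.0.3, is wired up the same way); this is a sketch of the helper's effect, not a substitute for nvmf/common.sh:

    # veth pair per side; the target-side end is moved into the test namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator keeps 10.0.0.1, target namespace gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # both bridge-side veth ends are enslaved to nvmf_br so the initiator side
    # and the namespaced target side can reach each other
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port toward the initiator, allow bridge forwarding, sanity-check reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    # nvmf_tgt itself then runs as:
    #   ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc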
00:19:35.835 [2024-07-15 16:05:29.389144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.401 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.401 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:36.401 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.401 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.401 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:36.659 null0 00:19:36.659 [2024-07-15 16:05:30.278982] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.659 [2024-07-15 16:05:30.303075] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93639 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93639 /var/tmp/bperf.sock 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93639 ']' 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.659 16:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:36.659 [2024-07-15 16:05:30.359836] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:19:36.659 [2024-07-15 16:05:30.359947] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93639 ] 00:19:36.918 [2024-07-15 16:05:30.491247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.918 [2024-07-15 16:05:30.611428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.851 16:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.851 16:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:37.851 16:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:37.851 16:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:37.851 16:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:38.108 16:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:38.108 16:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:38.366 nvme0n1 00:19:38.366 16:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:38.366 16:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:38.622 Running I/O for 2 seconds... 
00:19:40.518 00:19:40.518 Latency(us) 00:19:40.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.518 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:40.518 nvme0n1 : 2.00 18295.72 71.47 0.00 0.00 6987.90 3902.37 14834.97 00:19:40.518 =================================================================================================================== 00:19:40.518 Total : 18295.72 71.47 0.00 0.00 6987.90 3902.37 14834.97 00:19:40.518 0 00:19:40.518 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:40.518 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:40.518 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:40.518 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:40.518 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:40.518 | select(.opcode=="crc32c") 00:19:40.518 | "\(.module_name) \(.executed)"' 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93639 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93639 ']' 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93639 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93639 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:40.776 killing process with pid 93639 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93639' 00:19:40.776 Received shutdown signal, test time was about 2.000000 seconds 00:19:40.776 00:19:40.776 Latency(us) 00:19:40.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.776 =================================================================================================================== 00:19:40.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93639 00:19:40.776 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93639 00:19:41.033 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:41.033 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:41.033 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:41.033 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93725 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93725 /var/tmp/bperf.sock 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93725 ']' 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.034 16:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:41.292 [2024-07-15 16:05:34.771940] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:19:41.292 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:41.292 Zero copy mechanism will not be used. 
00:19:41.292 [2024-07-15 16:05:34.772796] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93725 ] 00:19:41.292 [2024-07-15 16:05:34.911019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.550 [2024-07-15 16:05:35.031202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.180 16:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.180 16:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:42.180 16:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:42.180 16:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:42.180 16:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:42.437 16:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:42.437 16:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:42.695 nvme0n1 00:19:42.954 16:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:42.954 16:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:42.954 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:42.954 Zero copy mechanism will not be used. 00:19:42.954 Running I/O for 2 seconds... 
00:19:44.853 00:19:44.853 Latency(us) 00:19:44.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.853 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:44.853 nvme0n1 : 2.00 7653.55 956.69 0.00 0.00 2086.47 647.91 10783.65 00:19:44.853 =================================================================================================================== 00:19:44.853 Total : 7653.55 956.69 0.00 0.00 2086.47 647.91 10783.65 00:19:44.853 0 00:19:44.853 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:44.853 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:44.853 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:44.853 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:44.853 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:44.853 | select(.opcode=="crc32c") 00:19:44.853 | "\(.module_name) \(.executed)"' 00:19:45.112 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:45.112 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:45.112 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:45.112 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:45.112 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93725 00:19:45.112 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93725 ']' 00:19:45.112 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93725 00:19:45.112 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:45.112 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.112 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93725 00:19:45.371 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:45.371 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:45.371 killing process with pid 93725 00:19:45.371 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93725' 00:19:45.371 Received shutdown signal, test time was about 2.000000 seconds 00:19:45.371 00:19:45.371 Latency(us) 00:19:45.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.371 =================================================================================================================== 00:19:45.371 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.371 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93725 00:19:45.371 16:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93725 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93814 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93814 /var/tmp/bperf.sock 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93814 ']' 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:45.371 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:45.372 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:45.372 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.372 16:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:45.630 [2024-07-15 16:05:39.139003] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:19:45.630 [2024-07-15 16:05:39.139152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93814 ] 00:19:45.630 [2024-07-15 16:05:39.271000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.889 [2024-07-15 16:05:39.392427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.455 16:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.455 16:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:46.455 16:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:46.455 16:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:46.455 16:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:47.021 16:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:47.021 16:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:47.021 nvme0n1 00:19:47.021 16:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:47.021 16:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:47.280 Running I/O for 2 seconds... 
00:19:49.180 00:19:49.180 Latency(us) 00:19:49.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.180 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:49.180 nvme0n1 : 2.00 23554.37 92.01 0.00 0.00 5426.20 2591.65 9592.09 00:19:49.180 =================================================================================================================== 00:19:49.180 Total : 23554.37 92.01 0.00 0.00 5426.20 2591.65 9592.09 00:19:49.180 0 00:19:49.180 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:49.180 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:49.180 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:49.180 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:49.180 16:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:49.180 | select(.opcode=="crc32c") 00:19:49.180 | "\(.module_name) \(.executed)"' 00:19:49.437 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:49.437 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:49.437 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:49.437 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:49.437 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93814 00:19:49.437 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93814 ']' 00:19:49.437 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93814 00:19:49.437 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:49.437 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.437 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93814 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:49.694 killing process with pid 93814 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93814' 00:19:49.694 Received shutdown signal, test time was about 2.000000 seconds 00:19:49.694 00:19:49.694 Latency(us) 00:19:49.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.694 =================================================================================================================== 00:19:49.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93814 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93814 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93906 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93906 /var/tmp/bperf.sock 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93906 ']' 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.694 16:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:49.952 [2024-07-15 16:05:43.455505] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:19:49.952 [2024-07-15 16:05:43.455628] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93906 ] 00:19:49.952 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:49.952 Zero copy mechanism will not be used. 
00:19:49.952 [2024-07-15 16:05:43.590131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:50.210 [2024-07-15 16:05:43.680991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:50.775 16:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:50.775 16:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:19:50.775 16:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:19:50.775 16:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:19:50.775 16:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:19:51.365 16:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:51.365 16:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:51.626 nvme0n1
00:19:51.626 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:19:51.626 16:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:51.626 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:51.626 Zero copy mechanism will not be used.
00:19:51.626 Running I/O for 2 seconds...
00:19:53.526
00:19:53.526 Latency(us)
00:19:53.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:53.526 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:19:53.526 nvme0n1 : 2.00 6348.82 793.60 0.00 0.00 2514.40 1980.97 7923.90
00:19:53.526 ===================================================================================================================
00:19:53.526 Total : 6348.82 793.60 0.00 0.00 2514.40 1980.97 7923.90
00:19:53.526 0
00:19:53.784 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:19:53.784 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:19:53.784 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:19:53.784 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:19:53.784 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:19:53.784 | select(.opcode=="crc32c")
00:19:53.784 | "\(.module_name) \(.executed)"'
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93906
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93906 ']'
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93906
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93906
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:54.043 killing process with pid 93906
Received shutdown signal, test time was about 2.000000 seconds
00:19:54.043
00:19:54.043 Latency(us)
00:19:54.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:54.043 ===================================================================================================================
00:19:54.043 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93906'
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93906
00:19:54.043 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93906
00:19:54.302 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93589
00:19:54.302 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93589 ']'
00:19:54.302 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93589
00:19:54.302 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:19:54.302 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:54.302 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93589
00:19:54.302 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:54.302 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:54.302 killing process with pid 93589
16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93589'
00:19:54.302 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93589
00:19:54.302 16:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93589
00:19:54.302
00:19:54.302 real 0m18.955s
00:19:54.302 user 0m36.434s
00:19:54.302 sys 0m4.602s
00:19:54.302 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:54.302 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:19:54.302 ************************************
00:19:54.302 END TEST nvmf_digest_clean
00:19:54.302 ************************************
00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- #
run_test nvmf_digest_error run_digest_error 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:54.560 ************************************ 00:19:54.560 START TEST nvmf_digest_error 00:19:54.560 ************************************ 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=94019 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 94019 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94019 ']' 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:54.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.560 16:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:54.560 [2024-07-15 16:05:48.137576] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:19:54.560 [2024-07-15 16:05:48.137685] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.560 [2024-07-15 16:05:48.271400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.819 [2024-07-15 16:05:48.380003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.819 [2024-07-15 16:05:48.380069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.819 [2024-07-15 16:05:48.380080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.819 [2024-07-15 16:05:48.380089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.819 [2024-07-15 16:05:48.380096] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
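Because nvmf_tgt is started with --wait-for-rpc, the test can reroute crc32c to the error accel module before the framework initializes, and only then build the TCP target it will attach to. A rough standalone sketch of that ordering against the target socket from this run; the null bdev size/block size and the subsystem serial are placeholder values, and the subsystem/listener commands are an assumed hand-built equivalent of the harness's common_target_config, not a verbatim copy of it:

#!/usr/bin/env bash
# Pre-init configuration first, then framework init, then the usual TCP target.
set -euo pipefail

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC accel_assign_opc -o crc32c -m error   # route crc32c to the error module
$RPC framework_start_init                  # finish the init deferred by --wait-for-rpc

$RPC bdev_null_create null0 100 4096       # placeholder size (MiB) and block size
$RPC nvmf_create_transport -t tcp
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420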
00:19:54.819 [2024-07-15 16:05:48.380121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:55.753 [2024-07-15 16:05:49.156651] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:55.753 null0 00:19:55.753 [2024-07-15 16:05:49.271049] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.753 [2024-07-15 16:05:49.295140] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94063 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94063 /var/tmp/bperf.sock 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94063 ']' 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.753 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.753 16:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:55.753 [2024-07-15 16:05:49.354559] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:19:55.753 [2024-07-15 16:05:49.354658] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94063 ] 00:19:56.012 [2024-07-15 16:05:49.492735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.012 [2024-07-15 16:05:49.595892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.971 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.971 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:56.971 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:56.971 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:56.971 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:56.971 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.971 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:56.971 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.971 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:56.971 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:57.229 nvme0n1 00:19:57.486 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:57.486 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.486 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:57.486 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.486 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:57.486 16:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:57.486 Running I/O for 2 seconds... 
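The entries above condense to a short RPC sequence: the initiator options and the --ddgst attach go to bdevperf's socket (bperf_rpc), the crc32c error injection goes through rpc_cmd, i.e. the target's default socket (/var/tmp/spdk.sock in this run), and perform_tests finally releases the queued job. A hedged recap of that sequence as a standalone script, assuming both applications are already running as shown earlier:

#!/usr/bin/env bash
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
TGT_RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"     # nvmf_tgt
BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"  # bdevperf

# Initiator: keep per-NVMe error stats and retry indefinitely at the bdev layer.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target: make sure no error injection is armed yet.
$TGT_RPC accel_error_inject_error -o crc32c -t disable

# Initiator: attach with data digest enabled so corrupted CRCs are detected.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target: arm crc32c corruption with the interval used in this run (-i 256).
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

# Release the queued bdevperf job (-z) and run the timed workload.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests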
00:19:57.486 [2024-07-15 16:05:51.091186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.486 [2024-07-15 16:05:51.091239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.486 [2024-07-15 16:05:51.091254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.486 [2024-07-15 16:05:51.103876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.486 [2024-07-15 16:05:51.103916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.486 [2024-07-15 16:05:51.103929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.486 [2024-07-15 16:05:51.115318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.486 [2024-07-15 16:05:51.115369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.486 [2024-07-15 16:05:51.115383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.486 [2024-07-15 16:05:51.130125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.486 [2024-07-15 16:05:51.130163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.486 [2024-07-15 16:05:51.130176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.486 [2024-07-15 16:05:51.144892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.486 [2024-07-15 16:05:51.144942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.486 [2024-07-15 16:05:51.144966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.486 [2024-07-15 16:05:51.159767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.486 [2024-07-15 16:05:51.159805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.487 [2024-07-15 16:05:51.159818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.487 [2024-07-15 16:05:51.172134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.487 [2024-07-15 16:05:51.172171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.487 [2024-07-15 16:05:51.172185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.487 [2024-07-15 16:05:51.188146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.487 [2024-07-15 16:05:51.188185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.487 [2024-07-15 16:05:51.188198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.487 [2024-07-15 16:05:51.201359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.487 [2024-07-15 16:05:51.201398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.487 [2024-07-15 16:05:51.201411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.744 [2024-07-15 16:05:51.215011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.744 [2024-07-15 16:05:51.215044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.744 [2024-07-15 16:05:51.215057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.744 [2024-07-15 16:05:51.228306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.744 [2024-07-15 16:05:51.228340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.744 [2024-07-15 16:05:51.228354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.744 [2024-07-15 16:05:51.241431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.744 [2024-07-15 16:05:51.241465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.744 [2024-07-15 16:05:51.241478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.744 [2024-07-15 16:05:51.255840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.744 [2024-07-15 16:05:51.255873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.744 [2024-07-15 16:05:51.255887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.744 [2024-07-15 16:05:51.269766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.744 [2024-07-15 16:05:51.269800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.269813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.283946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.283990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.284003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.298175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.298207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.298220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.309971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.310002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.310016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.324149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.324197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.324210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.337872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.337930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.337944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.351494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.351526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.351539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.365008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.365040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:57.745 [2024-07-15 16:05:51.365054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.375870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.375903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.375916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.391001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.391041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.391054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.405447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.405488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.405501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.419713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.419753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.419766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.433040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.433079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.433093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.445070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.445109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.445122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.745 [2024-07-15 16:05:51.461615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:57.745 [2024-07-15 16:05:51.461659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:4897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.745 [2024-07-15 16:05:51.461672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.003 [2024-07-15 16:05:51.476292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.003 [2024-07-15 16:05:51.476331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.003 [2024-07-15 16:05:51.476345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.003 [2024-07-15 16:05:51.491032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.003 [2024-07-15 16:05:51.491076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.003 [2024-07-15 16:05:51.491089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.003 [2024-07-15 16:05:51.504792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.003 [2024-07-15 16:05:51.504836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.003 [2024-07-15 16:05:51.504850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.003 [2024-07-15 16:05:51.517688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.003 [2024-07-15 16:05:51.517733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.003 [2024-07-15 16:05:51.517746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.003 [2024-07-15 16:05:51.532867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.003 [2024-07-15 16:05:51.532911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.003 [2024-07-15 16:05:51.532941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.003 [2024-07-15 16:05:51.545220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.003 [2024-07-15 16:05:51.545263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.003 [2024-07-15 16:05:51.545278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.003 [2024-07-15 16:05:51.559665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.003 [2024-07-15 16:05:51.559724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.003 [2024-07-15 16:05:51.559754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.003 [2024-07-15 16:05:51.573398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.003 [2024-07-15 16:05:51.573450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.003 [2024-07-15 16:05:51.573464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.003 [2024-07-15 16:05:51.586455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.003 [2024-07-15 16:05:51.586498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.003 [2024-07-15 16:05:51.586511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.003 [2024-07-15 16:05:51.599539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.004 [2024-07-15 16:05:51.599584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.004 [2024-07-15 16:05:51.599598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.004 [2024-07-15 16:05:51.614037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.004 [2024-07-15 16:05:51.614082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.004 [2024-07-15 16:05:51.614095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.004 [2024-07-15 16:05:51.626139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.004 [2024-07-15 16:05:51.626183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.004 [2024-07-15 16:05:51.626197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.004 [2024-07-15 16:05:51.640350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.004 [2024-07-15 16:05:51.640394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.004 [2024-07-15 16:05:51.640407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.004 [2024-07-15 16:05:51.655639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 
00:19:58.004 [2024-07-15 16:05:51.655682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.004 [2024-07-15 16:05:51.655696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.004 [2024-07-15 16:05:51.669409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.004 [2024-07-15 16:05:51.669452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.004 [2024-07-15 16:05:51.669466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.004 [2024-07-15 16:05:51.682837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.004 [2024-07-15 16:05:51.682881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.004 [2024-07-15 16:05:51.682895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.004 [2024-07-15 16:05:51.697117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.004 [2024-07-15 16:05:51.697160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.004 [2024-07-15 16:05:51.697174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.004 [2024-07-15 16:05:51.710694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.004 [2024-07-15 16:05:51.710738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.004 [2024-07-15 16:05:51.710751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.004 [2024-07-15 16:05:51.724037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.004 [2024-07-15 16:05:51.724079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.004 [2024-07-15 16:05:51.724098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.739300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.739345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.739358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.753681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.753724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.753738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.765326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.765368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.765398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.779409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.779452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.779466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.793240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.793283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.793297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.808573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.808617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.808631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.822497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.822541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.822554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.837863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.837943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.837970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.853051] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.853108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.853123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.864995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.865079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.865094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.880407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.880478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.880492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.894995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.895057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.895088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.910206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.910280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.910295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.925414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.925499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.925531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.938606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.938658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.938691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:19:58.263 [2024-07-15 16:05:51.952687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.952746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.952760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.966400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.966487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.966517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.263 [2024-07-15 16:05:51.982142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.263 [2024-07-15 16:05:51.982218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.263 [2024-07-15 16:05:51.982233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:51.994368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:51.994439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:51.994470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.009977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.010036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.010051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.024414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.024471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.024485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.039377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.039440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.039457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.053704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.053794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.053810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.068540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.068599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.068632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.083241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.083337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.083352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.097013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.097099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.097115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.110817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.110904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.110929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.127496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.127566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.127583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.140696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.140767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.140793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.156325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.156406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.156421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.171209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.171283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.522 [2024-07-15 16:05:52.171299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.522 [2024-07-15 16:05:52.186067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.522 [2024-07-15 16:05:52.186125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.523 [2024-07-15 16:05:52.186141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.523 [2024-07-15 16:05:52.200354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.523 [2024-07-15 16:05:52.200413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.523 [2024-07-15 16:05:52.200428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.523 [2024-07-15 16:05:52.213259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.523 [2024-07-15 16:05:52.213315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.523 [2024-07-15 16:05:52.213330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.523 [2024-07-15 16:05:52.229180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.523 [2024-07-15 16:05:52.229251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.523 [2024-07-15 16:05:52.229266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.523 [2024-07-15 16:05:52.243896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.523 [2024-07-15 16:05:52.243968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:58.523 [2024-07-15 16:05:52.243985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.256618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.256675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.256690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.271692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.271767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.271783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.284113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.284183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.284215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.300850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.300909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.300942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.314617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.314687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.314702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.330191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.330249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.330264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.346689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.346745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:8446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.346777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.363606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.363695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.363727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.378336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.378434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.378449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.390863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.390913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.390944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.405543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.405597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.405611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.420389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.420449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.420481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.434453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.434546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.434577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.449164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.449221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.449236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.462917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.463013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.463028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.477514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.477578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.477593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.490299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.490360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.490375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.782 [2024-07-15 16:05:52.504446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:58.782 [2024-07-15 16:05:52.504497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.782 [2024-07-15 16:05:52.504512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.041 [2024-07-15 16:05:52.516104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.041 [2024-07-15 16:05:52.516164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.041 [2024-07-15 16:05:52.516178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.041 [2024-07-15 16:05:52.531451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.531510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.531540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.545699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 
[2024-07-15 16:05:52.545769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.545800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.559775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.559837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.559851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.574831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.574903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.574920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.589044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.589102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.589117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.601022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.601078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.601093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.614675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.614737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.614752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.628290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.628355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.628369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.642994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.643068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.643083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.655622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.655682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.655697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.670319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.670407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.670422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.685163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.685220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.685236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.699387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.699449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.699464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.713255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.713336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.713352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.726107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.726177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.726193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.741842] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.741930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.741946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.753797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.753857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.753873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.042 [2024-07-15 16:05:52.767867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.042 [2024-07-15 16:05:52.767948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.042 [2024-07-15 16:05:52.767986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.301 [2024-07-15 16:05:52.782688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.301 [2024-07-15 16:05:52.782753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.301 [2024-07-15 16:05:52.782767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.301 [2024-07-15 16:05:52.796853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.301 [2024-07-15 16:05:52.796920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.301 [2024-07-15 16:05:52.796951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.301 [2024-07-15 16:05:52.810028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.301 [2024-07-15 16:05:52.810091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.301 [2024-07-15 16:05:52.810107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.301 [2024-07-15 16:05:52.824899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.301 [2024-07-15 16:05:52.824981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.301 [2024-07-15 16:05:52.824997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:19:59.301 [2024-07-15 16:05:52.837557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.301 [2024-07-15 16:05:52.837616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.301 [2024-07-15 16:05:52.837631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.301 [2024-07-15 16:05:52.852423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.301 [2024-07-15 16:05:52.852482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.301 [2024-07-15 16:05:52.852496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.301 [2024-07-15 16:05:52.866072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.301 [2024-07-15 16:05:52.866129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.302 [2024-07-15 16:05:52.866144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.302 [2024-07-15 16:05:52.881029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.302 [2024-07-15 16:05:52.881087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.302 [2024-07-15 16:05:52.881103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.302 [2024-07-15 16:05:52.897906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.302 [2024-07-15 16:05:52.897998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.302 [2024-07-15 16:05:52.898013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.302 [2024-07-15 16:05:52.918057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.302 [2024-07-15 16:05:52.918127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.302 [2024-07-15 16:05:52.918143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.302 [2024-07-15 16:05:52.933412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.302 [2024-07-15 16:05:52.933488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.302 [2024-07-15 16:05:52.933502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.302 [2024-07-15 16:05:52.952475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.302 [2024-07-15 16:05:52.952547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.302 [2024-07-15 16:05:52.952562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.302 [2024-07-15 16:05:52.970740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.302 [2024-07-15 16:05:52.970807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.302 [2024-07-15 16:05:52.970821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.302 [2024-07-15 16:05:52.986439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.302 [2024-07-15 16:05:52.986518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.302 [2024-07-15 16:05:52.986532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.302 [2024-07-15 16:05:53.008348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.302 [2024-07-15 16:05:53.008434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.302 [2024-07-15 16:05:53.008464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.302 [2024-07-15 16:05:53.027099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.302 [2024-07-15 16:05:53.027160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.302 [2024-07-15 16:05:53.027174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.560 [2024-07-15 16:05:53.046029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.560 [2024-07-15 16:05:53.046092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.560 [2024-07-15 16:05:53.046107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.561 [2024-07-15 16:05:53.061008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16493e0) 00:19:59.561 [2024-07-15 16:05:53.061069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.561 [2024-07-15 16:05:53.061099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.561 00:19:59.561 Latency(us) 00:19:59.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.561 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:59.561 nvme0n1 : 2.00 17669.15 69.02 0.00 0.00 7234.72 3038.49 26452.71 00:19:59.561 =================================================================================================================== 00:19:59.561 Total : 17669.15 69.02 0.00 0.00 7234.72 3038.49 26452.71 00:19:59.561 0 00:19:59.561 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:59.561 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:59.561 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:59.561 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:59.561 | .driver_specific 00:19:59.561 | .nvme_error 00:19:59.561 | .status_code 00:19:59.561 | .command_transient_transport_error' 00:19:59.868 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 )) 00:19:59.868 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94063 00:19:59.868 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94063 ']' 00:19:59.868 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94063 00:19:59.868 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:59.869 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.869 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94063 00:19:59.869 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:59.869 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:59.869 killing process with pid 94063 00:19:59.869 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94063' 00:19:59.869 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94063 00:19:59.869 Received shutdown signal, test time was about 2.000000 seconds 00:19:59.869 00:19:59.869 Latency(us) 00:19:59.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.869 =================================================================================================================== 00:19:59.869 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.869 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94063 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:00.127 
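The trace above (host/digest.sh@71 via @27, @18 and @28) is the pass/fail check for this error-injection run: it pulls the per-bdev NVMe error counters back over the bperf RPC socket and requires that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted (here 138 > 0). A minimal bash sketch of that check, reconstructed only from the commands visible in the trace (socket path, jq filter and bdev name are copied from the log, not taken from digest.sh itself):

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat reports driver-specific NVMe error counters when the
        # controller was set up with --nvme-error-stat (see the setup trace below).
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # this step passes only if transient transport errors were observed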
16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94154 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94154 /var/tmp/bperf.sock 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94154 ']' 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.127 16:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:00.127 [2024-07-15 16:05:53.665564] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:20:00.127 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:00.127 Zero copy mechanism will not be used. 00:20:00.127 [2024-07-15 16:05:53.665669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94154 ] 00:20:00.127 [2024-07-15 16:05:53.800857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.385 [2024-07-15 16:05:53.920048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.320 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.320 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:01.320 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:01.320 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:01.320 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:01.320 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.320 16:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:01.320 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.320 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:01.320 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:01.578 nvme0n1 00:20:01.836 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:01.836 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.836 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:01.836 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.836 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:01.836 16:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:01.836 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:01.836 Zero copy mechanism will not be used. 00:20:01.836 Running I/O for 2 seconds... 00:20:01.836 [2024-07-15 16:05:55.436055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.436147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.836 [2024-07-15 16:05:55.436162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.836 [2024-07-15 16:05:55.441247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.441290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.836 [2024-07-15 16:05:55.441304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.836 [2024-07-15 16:05:55.445641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.445699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.836 [2024-07-15 16:05:55.445712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.836 [2024-07-15 16:05:55.449829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.449873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.836 [2024-07-15 16:05:55.449887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.836 [2024-07-15 16:05:55.454054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.454098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.836 
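The setup traced at the start of this run is the second run_bperf_err pass (randread, 131072-byte I/O, queue depth 16, 2 seconds): bdevperf is started against /var/tmp/bperf.sock, NVMe error statistics are enabled, the controller is attached with TCP data digest (--ddgst) turned on, and crc32c corruption is injected into the accel framework before perform_tests starts the workload, which produces the data digest errors that follow. A hedged bash sketch of that sequence, using only commands shown in the xtrace output; note that in the trace the accel_error_inject_error calls go through rpc_cmd (the default application RPC socket), while the bdev/nvme calls go through bperf.sock, and which process serves the default socket is not visible in this excerpt:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # --nvme-error-stat makes bdev_get_iostat report per-status-code NVMe error
    # counters; --bdev-retry-count -1 is the retry setting used by this test
    # (both arguments copied from the trace).
    $rpc -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with data digest enabled so received payloads are crc32c-checked.
    $rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject crc32c corruption into the accel layer (arguments copied verbatim from
    # the trace; sent over the default RPC socket there, per rpc_cmd), then run the
    # timed workload from bdevperf.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests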
[2024-07-15 16:05:55.454112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.836 [2024-07-15 16:05:55.458139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.458190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.836 [2024-07-15 16:05:55.458203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.836 [2024-07-15 16:05:55.462586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.462631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.836 [2024-07-15 16:05:55.462644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.836 [2024-07-15 16:05:55.466673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.466745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.836 [2024-07-15 16:05:55.466758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.836 [2024-07-15 16:05:55.470929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.470987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.836 [2024-07-15 16:05:55.471001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.836 [2024-07-15 16:05:55.474702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.474747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.836 [2024-07-15 16:05:55.474760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.836 [2024-07-15 16:05:55.479332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.479375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.836 [2024-07-15 16:05:55.479389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.836 [2024-07-15 16:05:55.484346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.836 [2024-07-15 16:05:55.484390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.484403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.487629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.487673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.487686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.491977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.492033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.492047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.497034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.497073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.497086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.500469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.500512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.500525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.504609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.504653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.504666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.509441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.509499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.509513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.512794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.512836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.512849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.517437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.517485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.517500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.521362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.521407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.521420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.524695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.524740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.524754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.529064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.529106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.529134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.532752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.532797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.532810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.536593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.536637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.536650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.541120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.541163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.541177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.544874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.544918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.544932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.548716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.548761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.548774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.552705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.552749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.552763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.556683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.556728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.556757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.837 [2024-07-15 16:05:55.560633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:01.837 [2024-07-15 16:05:55.560677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.837 [2024-07-15 16:05:55.560691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.564557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.096 [2024-07-15 16:05:55.564602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.096 [2024-07-15 16:05:55.564616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.568786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.096 
[2024-07-15 16:05:55.568831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.096 [2024-07-15 16:05:55.568846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.572316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.096 [2024-07-15 16:05:55.572361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.096 [2024-07-15 16:05:55.572374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.576712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.096 [2024-07-15 16:05:55.576759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.096 [2024-07-15 16:05:55.576773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.580920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.096 [2024-07-15 16:05:55.580992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.096 [2024-07-15 16:05:55.581008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.585029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.096 [2024-07-15 16:05:55.585071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.096 [2024-07-15 16:05:55.585084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.588669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.096 [2024-07-15 16:05:55.588729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.096 [2024-07-15 16:05:55.588742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.592823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.096 [2024-07-15 16:05:55.592869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.096 [2024-07-15 16:05:55.592883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.596755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e7f380) 00:20:02.096 [2024-07-15 16:05:55.596799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.096 [2024-07-15 16:05:55.596812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.601165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.096 [2024-07-15 16:05:55.601209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.096 [2024-07-15 16:05:55.601223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.605205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.096 [2024-07-15 16:05:55.605250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.096 [2024-07-15 16:05:55.605264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.096 [2024-07-15 16:05:55.608357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.096 [2024-07-15 16:05:55.608402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.608415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.612623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.612669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.612683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.616566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.616609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.616622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.620748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.620799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.620812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.624422] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.624464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.624476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.628781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.628823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.628836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.633594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.633639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.633653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.636895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.636938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.636951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.641460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.641505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.641519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.646765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.646809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.646822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.652021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.652064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.652078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:02.097 [2024-07-15 16:05:55.655845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.655888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.655900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.659292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.659350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.659365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.664668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.664714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.664728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.669992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.670036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.670050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.674886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.674930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.674944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.678174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.678219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.678232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.682541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.682588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.682602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.686536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.686597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.686617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.690782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.690823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.690836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.694531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.694575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.694588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.698005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.698050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.698064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.702150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.702196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.702210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.706559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.706601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.706614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.097 [2024-07-15 16:05:55.710249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.097 [2024-07-15 16:05:55.710294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.097 [2024-07-15 16:05:55.710307] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.715001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.715059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.715073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.718898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.718941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.718953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.722547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.722591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.722604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.726750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.726793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.726805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.730165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.730209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.730222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.734256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.734299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.734311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.738936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.738990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.739004] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.742617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.742658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.742671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.747339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.747379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.747392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.751272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.751315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.751328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.754748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.754792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.754805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.759402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.759447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.759461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.764418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.764464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.764477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.767640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.767684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:02.098 [2024-07-15 16:05:55.767697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.771932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.771988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.772002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.776113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.776155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.776168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.780445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.780489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.780502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.784238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.784284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.784297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.788582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.788627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.788640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.792921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.792976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.792991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.796951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.797001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.797015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.801000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.801042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.801055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.805343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.805389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.805402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.808857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.098 [2024-07-15 16:05:55.808917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.098 [2024-07-15 16:05:55.808930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.098 [2024-07-15 16:05:55.813553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.099 [2024-07-15 16:05:55.813615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.099 [2024-07-15 16:05:55.813645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.099 [2024-07-15 16:05:55.817717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.099 [2024-07-15 16:05:55.817759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.099 [2024-07-15 16:05:55.817772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.099 [2024-07-15 16:05:55.821446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.099 [2024-07-15 16:05:55.821487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.099 [2024-07-15 16:05:55.821516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.826864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.826913] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.826934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.832219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.832261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.832275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.835767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.835809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.835822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.840392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.840437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.840451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.845510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.845554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.845567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.850076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.850119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.850133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.853486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.853525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.853539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.857941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.857997] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.858011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.861460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.861498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.861511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.865785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.865833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.865847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.869910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.869948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.869976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.873837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.873876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.873890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.877611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.877650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.877664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.881976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.882019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.882033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.885826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 
00:20:02.359 [2024-07-15 16:05:55.885866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.885880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.889803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.889841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.889854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.894401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.894443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.894456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.898720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.898766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.898780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.902584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.902647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.902660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.906522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.906564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.906578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.910874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.910917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.910930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.914079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.914123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.914136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.918921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.359 [2024-07-15 16:05:55.918978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.359 [2024-07-15 16:05:55.918993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.359 [2024-07-15 16:05:55.923172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.923216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.923230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.926903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.926950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.926978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.931257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.931298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.931312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.935371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.935416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.935430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.939249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.939293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.939307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.943804] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.943850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.943864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.948726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.948774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.948787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.951562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.951607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.951620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.956425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.956469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.956482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.959660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.959705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.959718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.963657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.963700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.963714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.968201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.968246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.968261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:02.360 [2024-07-15 16:05:55.972918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.972978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.972995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.975992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.976050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.976063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.980817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.980864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.980877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.984644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.984687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.984701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.988627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.988674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.988688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.993100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.993148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.993161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:55.996826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:55.996871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:55.996885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:56.001058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:56.001101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:56.001115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:56.006091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:56.006137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:56.006151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:56.009434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:56.009473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:56.009487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:56.013775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:56.013822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:56.013836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:56.018315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:56.018378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:56.018392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:56.021893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:56.021971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:56.021986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.360 [2024-07-15 16:05:56.026434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.360 [2024-07-15 16:05:56.026479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.360 [2024-07-15 16:05:56.026493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.031472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.031517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.031530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.035243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.035304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.035318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.039743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.039788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.039801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.043780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.043825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.043839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.047608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.047654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.047668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.052187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.052232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.052246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.056003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.056047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.056061] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.060589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.060635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.060648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.064464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.064505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.064519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.068142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.068187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.068200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.072370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.072415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.072429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.076361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.076418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.076431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.361 [2024-07-15 16:05:56.080816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.361 [2024-07-15 16:05:56.080860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.361 [2024-07-15 16:05:56.080874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.085694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.085738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:02.620 [2024-07-15 16:05:56.085751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.090093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.090138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.090151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.094709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.094754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.094767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.100567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.100627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.100640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.104536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.104580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.104592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.108861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.108906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.108921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.114007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.114050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.114064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.116731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.116769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.116783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.121892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.121948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.121979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.125419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.125459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.125472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.130223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.130283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.130313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.135149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.135193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.135223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.139444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.139487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.139501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.143187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.143231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.143244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.147751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.147793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.147822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.152095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.152153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.152183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.155744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.155785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.155814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.161016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.161058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.161088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.165681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.165724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.165755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.169576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.169632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.169662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.173836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.620 [2024-07-15 16:05:56.173879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.620 [2024-07-15 16:05:56.173918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.620 [2024-07-15 16:05:56.177070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 
[2024-07-15 16:05:56.177109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.177123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.181196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.181239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.181269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.185860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.185926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.185940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.190710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.190754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.190785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.194052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.194092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.194105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.198359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.198403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.198416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.202709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.202752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.202783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.205875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.205938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.205952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.210615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.210659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.210673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.215189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.215233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.215247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.217941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.217987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.218000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.222813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.222855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.222886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.226381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.226423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.226453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.230821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.230863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.230894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.236004] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.236047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.236061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.240844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.240889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.240902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.244747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.244789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.244803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.248067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.248109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.248122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.252368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.252412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.252426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.255679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.255721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.255735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.260128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.260172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.260186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:02.621 [2024-07-15 16:05:56.264124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.264166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.264180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.267534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.267579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.267592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.271819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.271864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.271877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.276401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.276444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.276458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.281649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.281691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.281704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.285202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.285245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.285258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.289284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.289329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.289342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.294044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.294087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.294100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.297032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.297068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.297081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.301454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.301499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.301513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.306452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.306498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.306512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.310990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.311030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.311044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.314953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.315008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.315022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.318164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.318207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.318220] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.323179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.323223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.323237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.328289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.328332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.328346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.332988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.333030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.333043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.336370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.336413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.336427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.341482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.621 [2024-07-15 16:05:56.341527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-07-15 16:05:56.341541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.621 [2024-07-15 16:05:56.346682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.880 [2024-07-15 16:05:56.346728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.880 [2024-07-15 16:05:56.346742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.880 [2024-07-15 16:05:56.350061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.880 [2024-07-15 16:05:56.350104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.880 [2024-07-15 16:05:56.350118] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.880 [2024-07-15 16:05:56.354571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.880 [2024-07-15 16:05:56.354612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.880 [2024-07-15 16:05:56.354625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.880 [2024-07-15 16:05:56.359683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.880 [2024-07-15 16:05:56.359728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.880 [2024-07-15 16:05:56.359742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.880 [2024-07-15 16:05:56.364798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.880 [2024-07-15 16:05:56.364841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.880 [2024-07-15 16:05:56.364855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.880 [2024-07-15 16:05:56.369775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.880 [2024-07-15 16:05:56.369818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.880 [2024-07-15 16:05:56.369831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.880 [2024-07-15 16:05:56.374191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.880 [2024-07-15 16:05:56.374235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.880 [2024-07-15 16:05:56.374248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.880 [2024-07-15 16:05:56.376885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.880 [2024-07-15 16:05:56.376924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.880 [2024-07-15 16:05:56.376937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.880 [2024-07-15 16:05:56.381100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.880 [2024-07-15 16:05:56.381144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.381158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.384911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.384977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.384999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.389433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.389476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.389497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.393281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.393324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.393338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.397686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.397739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.397754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.401882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.401932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.401946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.406327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.406372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.406386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.409742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.409795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.409816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.414918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.414972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.414987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.419355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.419397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.419410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.422617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.422672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.422686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.427299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.427343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.427356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.432680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.432722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.432735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.437496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.437539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.437553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.440993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.441030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.441043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.445778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.445820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.445834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.450542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.450586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.450599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.455479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.455524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.455538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.458195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.458238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.458250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.463367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.463412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.463426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.467872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.467915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.467929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.471247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 
[2024-07-15 16:05:56.471291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.471304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.475829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.475874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.475888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.479181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.479226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.479240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.881 [2024-07-15 16:05:56.483450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.881 [2024-07-15 16:05:56.483501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.881 [2024-07-15 16:05:56.483514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.487208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.487252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.487266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.491315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.491357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.491370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.496207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.496250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.496263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.499511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.499554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.499567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.503546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.503589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.503604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.507598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.507640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.507670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.512468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.512511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.512525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.516043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.516088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.516101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.520548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.520592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.520606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.524940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.524992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.525006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.528701] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.528741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.528754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.533183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.533227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.533241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.536944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.536999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.537014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.540879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.540924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.540938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.545085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.545128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.545142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.549010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.549054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.549068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.552668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.552712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.552725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:02.882 [2024-07-15 16:05:56.557081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.557124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.557138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.561253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.561293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.561306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.564951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.565006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.565020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.569254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.569308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.569322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.573274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.573318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.573331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.577540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.577582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.577595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.582653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.582706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.882 [2024-07-15 16:05:56.582720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.882 [2024-07-15 16:05:56.586404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.882 [2024-07-15 16:05:56.586442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.883 [2024-07-15 16:05:56.586472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.883 [2024-07-15 16:05:56.590688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.883 [2024-07-15 16:05:56.590731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.883 [2024-07-15 16:05:56.590761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.883 [2024-07-15 16:05:56.595135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.883 [2024-07-15 16:05:56.595179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.883 [2024-07-15 16:05:56.595192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.883 [2024-07-15 16:05:56.599075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.883 [2024-07-15 16:05:56.599120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.883 [2024-07-15 16:05:56.599133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.883 [2024-07-15 16:05:56.603739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:02.883 [2024-07-15 16:05:56.603790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.883 [2024-07-15 16:05:56.603804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.607667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.607710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.607724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.611607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.611647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.611676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.615606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.615646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.615675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.619867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.619909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.619922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.624352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.624393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.624423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.627540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.627583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.627612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.632321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.632363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.632393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.637408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.637463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.637476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.641862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.641910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.641941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.645738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.645775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.645804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.650443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.650485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.650514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.653857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.653919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.653933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.658596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.658643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.658673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.663566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.663606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.663636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.667851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.142 [2024-07-15 16:05:56.667895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.142 [2024-07-15 16:05:56.667908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.142 [2024-07-15 16:05:56.671267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.671309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 
[2024-07-15 16:05:56.671339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.675784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.675828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.675842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.679405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.679449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.679462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.683222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.683271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.683284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.687569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.687614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.687627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.691198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.691242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.691256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.695117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.695160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.695174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.699651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.699696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.699710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.703426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.703470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.703483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.707851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.707897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.707910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.711764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.711807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.711821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.715258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.715300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.715314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.720001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.720044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.720058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.723330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.723374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.723388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.727719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.727764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.727778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.732079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.732124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.732137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.735594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.735637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.735650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.739050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.739093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.739107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.743687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.743732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.743745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.748523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.748568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.748582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.751216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.751258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.751271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.755896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.755942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.755968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.759784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.759825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.759839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.764200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.764244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.764257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.768202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.768252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.768265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.771830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.771876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.771889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.776261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.776305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.776319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.779809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.779854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.779867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.783854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 
00:20:03.143 [2024-07-15 16:05:56.783898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.783911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.787258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.787303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.787317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.791763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.143 [2024-07-15 16:05:56.791807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.143 [2024-07-15 16:05:56.791821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.143 [2024-07-15 16:05:56.796145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.796188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.796202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.799708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.799753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.799766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.803517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.803562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.803575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.808151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.808199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.808212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.812177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.812221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.812235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.815960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.816015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.816029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.820307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.820351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.820395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.824100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.824145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.824158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.828001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.828044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.828058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.832113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.832157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.832170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.836491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.836535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.836548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.840486] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.840530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.840544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.843854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.843897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.843910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.848246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.848291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.848305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.852547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.852592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.852605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.856345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.856387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.856400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.860853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.860905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.860935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.144 [2024-07-15 16:05:56.864665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.144 [2024-07-15 16:05:56.864707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.144 [2024-07-15 16:05:56.864737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:03.403 [2024-07-15 16:05:56.868344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.403 [2024-07-15 16:05:56.868403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.403 [2024-07-15 16:05:56.868432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.403 [2024-07-15 16:05:56.872770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.403 [2024-07-15 16:05:56.872814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.403 [2024-07-15 16:05:56.872844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.403 [2024-07-15 16:05:56.876338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.403 [2024-07-15 16:05:56.876395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.403 [2024-07-15 16:05:56.876424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.403 [2024-07-15 16:05:56.880651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.403 [2024-07-15 16:05:56.880696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.403 [2024-07-15 16:05:56.880710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.403 [2024-07-15 16:05:56.884225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.403 [2024-07-15 16:05:56.884267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.403 [2024-07-15 16:05:56.884297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.403 [2024-07-15 16:05:56.888273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.403 [2024-07-15 16:05:56.888316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.403 [2024-07-15 16:05:56.888345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.892695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.892738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.892767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.896391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.896433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.896447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.900890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.900934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.900964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.905507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.905551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.905580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.909820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.909864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.909893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.912513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.912550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.912563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.917521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.917564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.917593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.921790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.921828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.921857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.925040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.925079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.925107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.929447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.929490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.929504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.933543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.933584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.933613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.937180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.937222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.937252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.941412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.941454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.941483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.945174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.945214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.945227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.948722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.948764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.948794] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.952492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.952533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.952562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.956774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.956816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.956845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.960878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.960937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.960951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.964903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.964949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.964990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.969049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.969101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.969115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.973116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.973160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.973173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.976926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.976999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:03.404 [2024-07-15 16:05:56.977013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.981245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.981287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.981301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.985553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.985598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.985611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.989909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.989952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.989981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.993133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.993172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.993185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:56.997172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:56.997216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:56.997229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:57.002300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:57.002342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.404 [2024-07-15 16:05:57.002355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.404 [2024-07-15 16:05:57.007049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.404 [2024-07-15 16:05:57.007088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.007101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.011151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.011196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.011209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.015423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.015467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.015480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.020228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.020272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.020286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.023928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.023986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.024000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.028739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.028781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.028795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.034124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.034167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.034184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.038150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.038194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.038207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.042403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.042445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.042458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.046711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.046756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.046769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.051467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.051512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.051525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.055593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.055637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.055651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.059350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.059394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.059407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.063496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.063539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.063569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.067466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 
00:20:03.405 [2024-07-15 16:05:57.067510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.067523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.072073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.072115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.072128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.075826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.075872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.075886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.080107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.080166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.080195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.084606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.084649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.084663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.087859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.087902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.087915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.093048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.093094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.093107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.098042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.098087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.098101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.102207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.102268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.102297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.105461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.105499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.105512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.109701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.109743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.109773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.114140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.114185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.114198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.118166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.118211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.118224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.121572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.121611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.121625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.125856] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.405 [2024-07-15 16:05:57.125906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.405 [2024-07-15 16:05:57.125920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.405 [2024-07-15 16:05:57.129251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.406 [2024-07-15 16:05:57.129291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.406 [2024-07-15 16:05:57.129305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.133637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.133678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.133692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.136818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.136858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.136872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.141478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.141523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.141537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.146319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.146365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.146379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.149630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.149669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.149682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.154167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.154212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.154225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.158650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.158693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.158706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.162424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.162468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.162482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.166235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.166280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.166293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.170822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.170863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.170876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.173887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.173935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.173949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.177836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.177876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.177889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.182659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.182703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.182716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.187518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.187562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.187576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.191703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.191748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.191761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.194791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.194830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.194843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.199123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.199166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.199180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.203566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.203611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.203625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.207194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.207239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.207253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.211588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.211631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.211644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.215771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.215815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.215829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.219326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.219369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.219382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.223710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.223754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.223767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.228494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.228537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.228552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.233144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.233187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.233200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.237855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.237937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 
[2024-07-15 16:05:57.237951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.240658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.240711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.240739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.245657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.664 [2024-07-15 16:05:57.245716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.664 [2024-07-15 16:05:57.245744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.664 [2024-07-15 16:05:57.248772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.248814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.248827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.252844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.252892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.252906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.257384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.257431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.257445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.260574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.260618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.260633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.265413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.265459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.265473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.269618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.269662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.269676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.273241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.273282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.273295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.278069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.278112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.278125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.281610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.281650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.281664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.285601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.285644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.285657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.290356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.290400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.290413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.294197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.294241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.294254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.297988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.298027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.298040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.302338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.302382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.302395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.306204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.306248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.306262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.310110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.310159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.310172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.314123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.314166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.314187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.318787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.318833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.318846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.322795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.322840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.322853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.327103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.327147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.327178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.331311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.331356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.331369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.335290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.335335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.335350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.339307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.339349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.339378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.343284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.343328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.343342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.348250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.348302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.348317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.351707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 
[2024-07-15 16:05:57.351751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.351764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.356408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.356453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.356467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.360846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.360891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.360921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.364019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.665 [2024-07-15 16:05:57.364063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.665 [2024-07-15 16:05:57.364076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.665 [2024-07-15 16:05:57.368404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.666 [2024-07-15 16:05:57.368448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.666 [2024-07-15 16:05:57.368462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.666 [2024-07-15 16:05:57.373420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.666 [2024-07-15 16:05:57.373466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.666 [2024-07-15 16:05:57.373480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.666 [2024-07-15 16:05:57.378091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.666 [2024-07-15 16:05:57.378135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.666 [2024-07-15 16:05:57.378149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.666 [2024-07-15 16:05:57.381068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e7f380) 00:20:03.666 [2024-07-15 16:05:57.381107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.666 [2024-07-15 16:05:57.381120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.666 [2024-07-15 16:05:57.386383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.666 [2024-07-15 16:05:57.386426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.666 [2024-07-15 16:05:57.386456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.923 [2024-07-15 16:05:57.391572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.923 [2024-07-15 16:05:57.391616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.923 [2024-07-15 16:05:57.391646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.923 [2024-07-15 16:05:57.396509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.923 [2024-07-15 16:05:57.396553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.923 [2024-07-15 16:05:57.396582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.923 [2024-07-15 16:05:57.400163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.923 [2024-07-15 16:05:57.400208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.923 [2024-07-15 16:05:57.400222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.923 [2024-07-15 16:05:57.404612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.923 [2024-07-15 16:05:57.404656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.923 [2024-07-15 16:05:57.404670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.923 [2024-07-15 16:05:57.409450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.923 [2024-07-15 16:05:57.409496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.923 [2024-07-15 16:05:57.409509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.923 [2024-07-15 16:05:57.414635] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.923 [2024-07-15 16:05:57.414681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.923 [2024-07-15 16:05:57.414695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.923 [2024-07-15 16:05:57.417625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.923 [2024-07-15 16:05:57.417664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.923 [2024-07-15 16:05:57.417677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.923 [2024-07-15 16:05:57.421874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7f380) 00:20:03.923 [2024-07-15 16:05:57.421926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.923 [2024-07-15 16:05:57.421941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.923 00:20:03.923 Latency(us) 00:20:03.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.923 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:03.923 nvme0n1 : 2.00 7403.34 925.42 0.00 0.00 2157.17 618.12 10366.60 00:20:03.923 =================================================================================================================== 00:20:03.923 Total : 7403.34 925.42 0.00 0.00 2157.17 618.12 10366.60 00:20:03.923 0 00:20:03.923 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:03.923 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:03.923 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:03.923 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:03.923 | .driver_specific 00:20:03.923 | .nvme_error 00:20:03.923 | .status_code 00:20:03.923 | .command_transient_transport_error' 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 477 > 0 )) 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94154 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94154 ']' 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94154 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94154 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 
-- # process_name=reactor_1 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94154' 00:20:04.180 killing process with pid 94154 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94154 00:20:04.180 Received shutdown signal, test time was about 2.000000 seconds 00:20:04.180 00:20:04.180 Latency(us) 00:20:04.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.180 =================================================================================================================== 00:20:04.180 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.180 16:05:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94154 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94244 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94244 /var/tmp/bperf.sock 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94244 ']' 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:04.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.437 16:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:04.437 [2024-07-15 16:05:58.072677] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
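The jq pipeline traced above (host/digest.sh lines 18, 27/28 and 71) is how the randread digest-error pass decides pass/fail: it reads the per-controller error counters from bdevperf over its RPC socket with bdev_get_iostat and requires the transient transport error count to be non-zero (477 here, consistent with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions printed throughout the run above). A minimal shell sketch of that check, using the SPDK checkout and bperf socket paths from this run; the counter is available because the NVMe bdev module is configured with --nvme-error-stat (see the bdev_nvme_set_options call traced below):

  # Ask bdevperf for per-bdev I/O statistics over its private RPC socket and pull out
  # the transient transport error counter accumulated for nvme0n1.
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')

  # The injected crc32c corruption must have surfaced as transient transport errors.
  (( count > 0 )) || exit 1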
00:20:04.437 [2024-07-15 16:05:58.072790] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94244 ] 00:20:04.694 [2024-07-15 16:05:58.207440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.694 [2024-07-15 16:05:58.324325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.627 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.627 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:05.627 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:05.628 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:05.628 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:05.628 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.628 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:05.628 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.628 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:05.628 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:06.193 nvme0n1 00:20:06.193 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:06.193 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.193 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.193 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.193 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:06.193 16:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:06.193 Running I/O for 2 seconds... 
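The trace just above shows the setup for the second error-injection pass (run_bperf_err randwrite 4096 128): bdevperf is relaunched in wait-for-RPC mode (-z) on core 1, the NVMe bdev layer is configured to keep per-status-code error statistics and retry failed I/O indefinitely, the controller is attached over TCP with data digest enabled (--ddgst), and crc32c error injection is re-armed in the accel layer so digest calculation goes wrong and the affected commands complete with COMMAND TRANSIENT TRANSPORT ERROR. Condensed into plain commands in trace order; note that the two accel_error_inject_error calls go through rpc_cmd, i.e. to the nvmf target application rather than to bperf.sock, and the target's RPC socket is not shown in the trace, so the SPDK default /var/tmp/spdk.sock is assumed in this sketch:

  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # bdevperf: core mask 0x2 (core 1), 4 KiB random writes, queue depth 128, 2 s, wait for RPC config (-z)
  "$SPDK"/build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

  # Wait for the bperf RPC socket before configuring it (waitforlisten in the trace).
  while [ ! -S "$BPERF_SOCK" ]; do sleep 0.1; done

  # Host side (bdevperf): keep NVMe error statistics and never give up on retries.
  "$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target side: make sure crc32c error injection is off while the controller attaches.
  "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # Attach the controller over TCP with data digest enabled.
  "$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target side: corrupt crc32c results (-i 256 as in the trace) so data digests no longer match.
  "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the timed workload; the data digest errors logged below are the expected outcome.
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests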
00:20:06.193 [2024-07-15 16:05:59.823843] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ee5c8 00:20:06.194 [2024-07-15 16:05:59.824785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.194 [2024-07-15 16:05:59.824830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:06.194 [2024-07-15 16:05:59.835037] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e2c28 00:20:06.194 [2024-07-15 16:05:59.835740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.194 [2024-07-15 16:05:59.835781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:06.194 [2024-07-15 16:05:59.848987] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ecc78 00:20:06.194 [2024-07-15 16:05:59.850524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.194 [2024-07-15 16:05:59.850567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:06.194 [2024-07-15 16:05:59.860076] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f1868 00:20:06.194 [2024-07-15 16:05:59.861352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.194 [2024-07-15 16:05:59.861393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:06.194 [2024-07-15 16:05:59.871557] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e73e0 00:20:06.194 [2024-07-15 16:05:59.872812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.194 [2024-07-15 16:05:59.872854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:06.194 [2024-07-15 16:05:59.883416] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f2510 00:20:06.194 [2024-07-15 16:05:59.884644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.194 [2024-07-15 16:05:59.884680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:06.194 [2024-07-15 16:05:59.894400] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ef270 00:20:06.194 [2024-07-15 16:05:59.895489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.194 [2024-07-15 16:05:59.895526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 
sqhd:0039 p:0 m:0 dnr:0 00:20:06.194 [2024-07-15 16:05:59.908586] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190feb58 00:20:06.194 [2024-07-15 16:05:59.910541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.194 [2024-07-15 16:05:59.910583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:06.194 [2024-07-15 16:05:59.917016] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190de8a8 00:20:06.194 [2024-07-15 16:05:59.917831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.194 [2024-07-15 16:05:59.917888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:06.452 [2024-07-15 16:05:59.930307] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f9f68 00:20:06.452 [2024-07-15 16:05:59.931597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.452 [2024-07-15 16:05:59.931634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:06.452 [2024-07-15 16:05:59.941515] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f81e0 00:20:06.452 [2024-07-15 16:05:59.942658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.452 [2024-07-15 16:05:59.942712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:06.452 [2024-07-15 16:05:59.952664] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f20d8 00:20:06.452 [2024-07-15 16:05:59.953605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.452 [2024-07-15 16:05:59.953644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:06.452 [2024-07-15 16:05:59.963772] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190df118 00:20:06.452 [2024-07-15 16:05:59.964593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.452 [2024-07-15 16:05:59.964628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:06.452 [2024-07-15 16:05:59.977668] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e5ec8 00:20:06.452 [2024-07-15 16:05:59.978732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.452 [2024-07-15 16:05:59.978778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:06.452 [2024-07-15 16:05:59.988273] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fb8b8 00:20:06.452 [2024-07-15 16:05:59.989452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.452 [2024-07-15 16:05:59.989493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:06.452 [2024-07-15 16:06:00.002278] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e6738 00:20:06.452 [2024-07-15 16:06:00.004039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.452 [2024-07-15 16:06:00.004075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:06.452 [2024-07-15 16:06:00.014060] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e4de8 00:20:06.452 [2024-07-15 16:06:00.015845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.452 [2024-07-15 16:06:00.015880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:06.452 [2024-07-15 16:06:00.025156] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f46d0 00:20:06.452 [2024-07-15 16:06:00.026775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.452 [2024-07-15 16:06:00.026816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:06.452 [2024-07-15 16:06:00.036176] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f2510 00:20:06.452 [2024-07-15 16:06:00.037666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.452 [2024-07-15 16:06:00.037701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:06.452 [2024-07-15 16:06:00.047273] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fdeb0 00:20:06.452 [2024-07-15 16:06:00.048592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.048629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:06.453 [2024-07-15 16:06:00.058209] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f35f0 00:20:06.453 [2024-07-15 16:06:00.059384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.059421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.453 [2024-07-15 16:06:00.071300] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190dfdc0 00:20:06.453 [2024-07-15 16:06:00.072966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.073030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:06.453 [2024-07-15 16:06:00.079580] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f7da8 00:20:06.453 [2024-07-15 16:06:00.080309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.080358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:06.453 [2024-07-15 16:06:00.093761] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fe2e8 00:20:06.453 [2024-07-15 16:06:00.095014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.095059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:06.453 [2024-07-15 16:06:00.106969] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f4298 00:20:06.453 [2024-07-15 16:06:00.108690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.108744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:06.453 [2024-07-15 16:06:00.118215] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e4de8 00:20:06.453 [2024-07-15 16:06:00.119744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.119781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:06.453 [2024-07-15 16:06:00.129231] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f0ff8 00:20:06.453 [2024-07-15 16:06:00.130587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.130626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:06.453 [2024-07-15 16:06:00.140876] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190df988 00:20:06.453 [2024-07-15 16:06:00.141775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.141811] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:06.453 [2024-07-15 16:06:00.152530] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f0bc0 00:20:06.453 [2024-07-15 16:06:00.153768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.153805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:06.453 [2024-07-15 16:06:00.164463] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fda78 00:20:06.453 [2024-07-15 16:06:00.165536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.165572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:06.453 [2024-07-15 16:06:00.177808] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fb048 00:20:06.453 [2024-07-15 16:06:00.179367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.453 [2024-07-15 16:06:00.179404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:06.710 [2024-07-15 16:06:00.188906] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f6020 00:20:06.710 [2024-07-15 16:06:00.190349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.710 [2024-07-15 16:06:00.190398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:06.710 [2024-07-15 16:06:00.202334] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fd208 00:20:06.710 [2024-07-15 16:06:00.204249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.710 [2024-07-15 16:06:00.204285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:06.710 [2024-07-15 16:06:00.210729] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ee190 00:20:06.710 [2024-07-15 16:06:00.211730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.710 [2024-07-15 16:06:00.211772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:06.710 [2024-07-15 16:06:00.222792] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f3e60 00:20:06.710 [2024-07-15 16:06:00.223764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.710 [2024-07-15 16:06:00.223800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.236101] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e5220 00:20:06.711 [2024-07-15 16:06:00.237525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.237558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.247009] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f5378 00:20:06.711 [2024-07-15 16:06:00.248148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.248189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.258400] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e9e10 00:20:06.711 [2024-07-15 16:06:00.259380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.259415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.271089] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fa3a0 00:20:06.711 [2024-07-15 16:06:00.272529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.272564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.281679] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190eee38 00:20:06.711 [2024-07-15 16:06:00.282871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.282908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.293043] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e4578 00:20:06.711 [2024-07-15 16:06:00.294274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.294317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.304990] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fc128 00:20:06.711 [2024-07-15 16:06:00.306145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 
16:06:00.306185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.316167] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f0ff8 00:20:06.711 [2024-07-15 16:06:00.317193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.317227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.329732] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f57b0 00:20:06.711 [2024-07-15 16:06:00.331315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.331369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.340449] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e4de8 00:20:06.711 [2024-07-15 16:06:00.341905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.341967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.351903] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e27f0 00:20:06.711 [2024-07-15 16:06:00.353241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.353274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.363694] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fc560 00:20:06.711 [2024-07-15 16:06:00.364562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.364601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.375015] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ebb98 00:20:06.711 [2024-07-15 16:06:00.375793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.375827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.388142] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190df988 00:20:06.711 [2024-07-15 16:06:00.389649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23462 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:06.711 [2024-07-15 16:06:00.389695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.399178] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190df988 00:20:06.711 [2024-07-15 16:06:00.400518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.400554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.411135] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190df550 00:20:06.711 [2024-07-15 16:06:00.412152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.412188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.422616] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190feb58 00:20:06.711 [2024-07-15 16:06:00.423532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.423569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:06.711 [2024-07-15 16:06:00.433398] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190df988 00:20:06.711 [2024-07-15 16:06:00.434417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.711 [2024-07-15 16:06:00.434457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.968 [2024-07-15 16:06:00.445187] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190dece0 00:20:06.968 [2024-07-15 16:06:00.446189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.968 [2024-07-15 16:06:00.446229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:06.968 [2024-07-15 16:06:00.456307] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ec408 00:20:06.968 [2024-07-15 16:06:00.457197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.457234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.469972] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e01f8 00:20:06.969 [2024-07-15 16:06:00.471555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20264 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.471594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.480589] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f4298 00:20:06.969 [2024-07-15 16:06:00.482404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.482444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.493677] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f57b0 00:20:06.969 [2024-07-15 16:06:00.495044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.495081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.504904] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e4de8 00:20:06.969 [2024-07-15 16:06:00.506121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.506160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.516086] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f1868 00:20:06.969 [2024-07-15 16:06:00.517140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.517174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.530401] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f8e88 00:20:06.969 [2024-07-15 16:06:00.532273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.532307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.538846] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fbcf0 00:20:06.969 [2024-07-15 16:06:00.539736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.539770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.553115] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190de470 00:20:06.969 [2024-07-15 16:06:00.554738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:22046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.554777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.564130] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f4298 00:20:06.969 [2024-07-15 16:06:00.565491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.565527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.575832] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fc998 00:20:06.969 [2024-07-15 16:06:00.577103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.577135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.589829] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190dfdc0 00:20:06.969 [2024-07-15 16:06:00.591829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.591866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.598394] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f3a28 00:20:06.969 [2024-07-15 16:06:00.599239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.599272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.611928] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190eb760 00:20:06.969 [2024-07-15 16:06:00.613183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.613218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.625221] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f81e0 00:20:06.969 [2024-07-15 16:06:00.627032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.627070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.633594] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fb480 00:20:06.969 [2024-07-15 16:06:00.634495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:96 nsid:1 lba:5624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.634531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.648030] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e0a68 00:20:06.969 [2024-07-15 16:06:00.649524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.649559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.660246] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e5220 00:20:06.969 [2024-07-15 16:06:00.661765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.661801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.671411] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e9168 00:20:06.969 [2024-07-15 16:06:00.672772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.672808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.683432] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ebb98 00:20:06.969 [2024-07-15 16:06:00.684459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.684494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:06.969 [2024-07-15 16:06:00.694949] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fe2e8 00:20:06.969 [2024-07-15 16:06:00.695882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.969 [2024-07-15 16:06:00.695919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.706323] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f46d0 00:20:07.228 [2024-07-15 16:06:00.707060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.707113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.718920] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e2c28 00:20:07.228 [2024-07-15 16:06:00.720465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.720498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.729224] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190de470 00:20:07.228 [2024-07-15 16:06:00.730645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.730683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.740188] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ec408 00:20:07.228 [2024-07-15 16:06:00.741529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.741562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.751482] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fdeb0 00:20:07.228 [2024-07-15 16:06:00.752826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.752859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.762176] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e01f8 00:20:07.228 [2024-07-15 16:06:00.763387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.763420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.772618] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f7da8 00:20:07.228 [2024-07-15 16:06:00.773719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.773753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.783404] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e4140 00:20:07.228 [2024-07-15 16:06:00.784502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.784537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.794576] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fda78 00:20:07.228 [2024-07-15 
16:06:00.795658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.795693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.807942] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e9e10 00:20:07.228 [2024-07-15 16:06:00.809497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.809532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.819284] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e6300 00:20:07.228 [2024-07-15 16:06:00.820660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.820718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.831255] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e6b70 00:20:07.228 [2024-07-15 16:06:00.832492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.832528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.843044] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e38d0 00:20:07.228 [2024-07-15 16:06:00.844265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.844301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.856010] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190de8a8 00:20:07.228 [2024-07-15 16:06:00.857704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.857740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.864173] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ef270 00:20:07.228 [2024-07-15 16:06:00.864925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.864968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.877889] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e7c50 
00:20:07.228 [2024-07-15 16:06:00.879312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.879348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.888561] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e6fa8 00:20:07.228 [2024-07-15 16:06:00.889714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.889753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.899744] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fb8b8 00:20:07.228 [2024-07-15 16:06:00.900875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.900911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.913507] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f1868 00:20:07.228 [2024-07-15 16:06:00.915310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.915348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.925186] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fa3a0 00:20:07.228 [2024-07-15 16:06:00.926980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.927017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.933062] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f6020 00:20:07.228 [2024-07-15 16:06:00.933889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.933938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:07.228 [2024-07-15 16:06:00.946801] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f5378 00:20:07.228 [2024-07-15 16:06:00.948287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.228 [2024-07-15 16:06:00.948322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:07.486 [2024-07-15 16:06:00.956986] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with 
pdu=0x2000190e27f0 00:20:07.486 [2024-07-15 16:06:00.958785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.486 [2024-07-15 16:06:00.958825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:07.486 [2024-07-15 16:06:00.969295] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190feb58 00:20:07.486 [2024-07-15 16:06:00.970213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.486 [2024-07-15 16:06:00.970265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:07.486 [2024-07-15 16:06:00.980558] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e6fa8 00:20:07.486 [2024-07-15 16:06:00.981310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.486 [2024-07-15 16:06:00.981346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:07.486 [2024-07-15 16:06:00.991866] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e84c0 00:20:07.486 [2024-07-15 16:06:00.992491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.486 [2024-07-15 16:06:00.992527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:07.486 [2024-07-15 16:06:01.004910] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fd208 00:20:07.487 [2024-07-15 16:06:01.006318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.006371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.015712] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f6890 00:20:07.487 [2024-07-15 16:06:01.016925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.016967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.026256] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f3e60 00:20:07.487 [2024-07-15 16:06:01.027379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.027414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.037287] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b08c70) with pdu=0x2000190e2c28 00:20:07.487 [2024-07-15 16:06:01.038381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.038418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.050939] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f6020 00:20:07.487 [2024-07-15 16:06:01.052702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.052736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.062839] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f6890 00:20:07.487 [2024-07-15 16:06:01.064737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.064770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.071025] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fda78 00:20:07.487 [2024-07-15 16:06:01.071963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.071996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.084712] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f81e0 00:20:07.487 [2024-07-15 16:06:01.086155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.086193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.095581] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ddc00 00:20:07.487 [2024-07-15 16:06:01.096850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.096887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.106565] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ebb98 00:20:07.487 [2024-07-15 16:06:01.107718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.107755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.118035] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1b08c70) with pdu=0x2000190e7c50 00:20:07.487 [2024-07-15 16:06:01.118823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.118860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.129030] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ebfd0 00:20:07.487 [2024-07-15 16:06:01.129715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.129751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.142632] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f3e60 00:20:07.487 [2024-07-15 16:06:01.144369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.144405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.150802] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fa7d8 00:20:07.487 [2024-07-15 16:06:01.151617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.151651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.162536] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e5a90 00:20:07.487 [2024-07-15 16:06:01.163355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.163391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.176016] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f3a28 00:20:07.487 [2024-07-15 16:06:01.177462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.177498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.187718] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e3060 00:20:07.487 [2024-07-15 16:06:01.189165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.189200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.198720] 
tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e38d0 00:20:07.487 [2024-07-15 16:06:01.200034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.200069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:07.487 [2024-07-15 16:06:01.209663] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e0a68 00:20:07.487 [2024-07-15 16:06:01.210812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.487 [2024-07-15 16:06:01.210851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:07.745 [2024-07-15 16:06:01.220975] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fd208 00:20:07.745 [2024-07-15 16:06:01.222145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.745 [2024-07-15 16:06:01.222188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:07.745 [2024-07-15 16:06:01.232593] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fb048 00:20:07.745 [2024-07-15 16:06:01.233277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.745 [2024-07-15 16:06:01.233312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.745 [2024-07-15 16:06:01.244588] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f9f68 00:20:07.745 [2024-07-15 16:06:01.245432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.745 [2024-07-15 16:06:01.245468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:07.745 [2024-07-15 16:06:01.255561] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e6fa8 00:20:07.745 [2024-07-15 16:06:01.256293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.745 [2024-07-15 16:06:01.256328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:07.745 [2024-07-15 16:06:01.268708] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f7970 00:20:07.745 [2024-07-15 16:06:01.270234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.745 [2024-07-15 16:06:01.270278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.745 
[2024-07-15 16:06:01.279329] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fd640 00:20:07.745 [2024-07-15 16:06:01.280664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.745 [2024-07-15 16:06:01.280700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.745 [2024-07-15 16:06:01.290498] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ecc78 00:20:07.745 [2024-07-15 16:06:01.291842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.745 [2024-07-15 16:06:01.291878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:07.745 [2024-07-15 16:06:01.302193] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f35f0 00:20:07.745 [2024-07-15 16:06:01.303516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.745 [2024-07-15 16:06:01.303554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:07.745 [2024-07-15 16:06:01.313116] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e5ec8 00:20:07.745 [2024-07-15 16:06:01.314325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.745 [2024-07-15 16:06:01.314368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:07.745 [2024-07-15 16:06:01.324071] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e0a68 00:20:07.745 [2024-07-15 16:06:01.325089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.745 [2024-07-15 16:06:01.325125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:07.745 [2024-07-15 16:06:01.335308] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fc998 00:20:07.745 [2024-07-15 16:06:01.336195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.745 [2024-07-15 16:06:01.336230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:07.745 [2024-07-15 16:06:01.346289] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f6458 00:20:07.746 [2024-07-15 16:06:01.347002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.746 [2024-07-15 16:06:01.347038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 
m:0 dnr:0 00:20:07.746 [2024-07-15 16:06:01.360806] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e6b70 00:20:07.746 [2024-07-15 16:06:01.362506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.746 [2024-07-15 16:06:01.362543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:07.746 [2024-07-15 16:06:01.371835] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190dece0 00:20:07.746 [2024-07-15 16:06:01.373384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.746 [2024-07-15 16:06:01.373429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:07.746 [2024-07-15 16:06:01.382817] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e73e0 00:20:07.746 [2024-07-15 16:06:01.384195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.746 [2024-07-15 16:06:01.384228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:07.746 [2024-07-15 16:06:01.393856] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e5220 00:20:07.746 [2024-07-15 16:06:01.395106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.746 [2024-07-15 16:06:01.395143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:07.746 [2024-07-15 16:06:01.404893] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f3e60 00:20:07.746 [2024-07-15 16:06:01.405995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.746 [2024-07-15 16:06:01.406032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:07.746 [2024-07-15 16:06:01.416245] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fb8b8 00:20:07.746 [2024-07-15 16:06:01.417328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.746 [2024-07-15 16:06:01.417364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:07.746 [2024-07-15 16:06:01.430099] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f0350 00:20:07.746 [2024-07-15 16:06:01.431813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.746 [2024-07-15 16:06:01.431848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:07.746 [2024-07-15 16:06:01.438374] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f31b8 00:20:07.746 [2024-07-15 16:06:01.439168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.746 [2024-07-15 16:06:01.439203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:07.746 [2024-07-15 16:06:01.452192] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e01f8 00:20:07.746 [2024-07-15 16:06:01.453626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.746 [2024-07-15 16:06:01.453661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:07.746 [2024-07-15 16:06:01.463769] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190df988 00:20:07.746 [2024-07-15 16:06:01.464735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.746 [2024-07-15 16:06:01.464771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.475147] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e9e10 00:20:08.004 [2024-07-15 16:06:01.476448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.476485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.486332] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190eee38 00:20:08.004 [2024-07-15 16:06:01.487628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.487663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.496978] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190de038 00:20:08.004 [2024-07-15 16:06:01.498035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.498071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.508117] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e2c28 00:20:08.004 [2024-07-15 16:06:01.509130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.509164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.521817] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ebfd0 00:20:08.004 [2024-07-15 16:06:01.523480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.523517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.529971] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e9e10 00:20:08.004 [2024-07-15 16:06:01.530709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.530746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.541567] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fc998 00:20:08.004 [2024-07-15 16:06:01.542299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.542349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.554571] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f8e88 00:20:08.004 [2024-07-15 16:06:01.555929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.555977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.565808] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f57b0 00:20:08.004 [2024-07-15 16:06:01.567020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.567057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.579467] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ecc78 00:20:08.004 [2024-07-15 16:06:01.581321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.581356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.587618] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f6cc8 00:20:08.004 [2024-07-15 16:06:01.588383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.588419] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.601943] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e3498 00:20:08.004 [2024-07-15 16:06:01.603666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.603702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.609752] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e5ec8 00:20:08.004 [2024-07-15 16:06:01.610516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.610554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.621362] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f0ff8 00:20:08.004 [2024-07-15 16:06:01.622119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.622160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.634767] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f2510 00:20:08.004 [2024-07-15 16:06:01.636169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.636204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.646244] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e0ea0 00:20:08.004 [2024-07-15 16:06:01.647158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.647194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.657193] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f1ca0 00:20:08.004 [2024-07-15 16:06:01.658008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.658043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.668041] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f9f68 00:20:08.004 [2024-07-15 16:06:01.668636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.668671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.681091] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ed920 00:20:08.004 [2024-07-15 16:06:01.682486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.682525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.691997] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190efae0 00:20:08.004 [2024-07-15 16:06:01.693244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.693278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.704652] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f20d8 00:20:08.004 [2024-07-15 16:06:01.706385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.706424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.714079] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190ecc78 00:20:08.004 [2024-07-15 16:06:01.714867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.714902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:08.004 [2024-07-15 16:06:01.725708] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190df118 00:20:08.004 [2024-07-15 16:06:01.726987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.004 [2024-07-15 16:06:01.727034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:08.273 [2024-07-15 16:06:01.739406] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190edd58 00:20:08.273 [2024-07-15 16:06:01.741304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.273 [2024-07-15 16:06:01.741339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:08.273 [2024-07-15 16:06:01.747514] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190eaef0 00:20:08.273 [2024-07-15 16:06:01.748480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.273 [2024-07-15 
16:06:01.748515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:08.273 [2024-07-15 16:06:01.759205] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190f7970 00:20:08.273 [2024-07-15 16:06:01.760173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.273 [2024-07-15 16:06:01.760209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:08.273 [2024-07-15 16:06:01.770079] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190df118 00:20:08.273 [2024-07-15 16:06:01.770901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.273 [2024-07-15 16:06:01.770936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:08.273 [2024-07-15 16:06:01.783976] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fa3a0 00:20:08.273 [2024-07-15 16:06:01.785578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.273 [2024-07-15 16:06:01.785614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:08.273 [2024-07-15 16:06:01.794648] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190e84c0 00:20:08.273 [2024-07-15 16:06:01.795990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.273 [2024-07-15 16:06:01.796025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.273 [2024-07-15 16:06:01.805760] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08c70) with pdu=0x2000190fd640 00:20:08.273 [2024-07-15 16:06:01.807114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.273 [2024-07-15 16:06:01.807152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:08.273 00:20:08.273 Latency(us) 00:20:08.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.274 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:08.274 nvme0n1 : 2.00 21923.57 85.64 0.00 0.00 5832.07 2323.55 15490.33 00:20:08.274 =================================================================================================================== 00:20:08.274 Total : 21923.57 85.64 0.00 0.00 5832.07 2323.55 15490.33 00:20:08.274 0 00:20:08.274 16:06:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:08.274 16:06:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:08.274 16:06:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq 
-r '.bdevs[0]
00:20:08.274 | .driver_specific
00:20:08.274 | .nvme_error
00:20:08.274 | .status_code
00:20:08.274 | .command_transient_transport_error'
00:20:08.274 16:06:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:08.558 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 172 > 0 ))
00:20:08.558 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94244
00:20:08.558 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94244 ']'
00:20:08.558 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94244
00:20:08.558 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:20:08.558 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:08.558 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94244
00:20:08.558 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:08.558 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:08.558 killing process with pid 94244
00:20:08.558 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94244'
00:20:08.558 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94244
00:20:08.559 Received shutdown signal, test time was about 2.000000 seconds
00:20:08.559
00:20:08.559 Latency(us)
00:20:08.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:08.559 ===================================================================================================================
00:20:08.559 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:08.559 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94244
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94329
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94329 /var/tmp/bperf.sock
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94329 ']'
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
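The digest.sh step traced just above amounts to a single RPC piped through jq: bdev_get_iostat on the bperf socket, then the command_transient_transport_error counter pulled out of the nvme_error statistics. A minimal stand-alone sketch, using only the rpc.py invocation, socket path, bdev name, and jq filter shown in this trace (the errcount variable name is illustrative, not from the harness):

# Sketch only: reproduce the transient-error readback from the trace above by hand.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The preceding qd=128 run reported 172 such completions, which is why the (( 172 > 0 )) check above passed.
(( errcount > 0 )) && echo "data digest errors surfaced as $errcount transient transport errors"
# Rough consistency check on the summary table above: 21923.57 IOPS * 4096 B / 2^20 is about 85.64 MiB/s,
# matching the reported throughput.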
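The next run, configured in the trace that follows, exercises the same data-digest path with 128 KiB random writes at queue depth 16. A condensed sketch of that setup, assembled only from the commands logged below; the waitforlisten step is omitted, and it is assumed (not shown in this excerpt) that rpc_cmd is the autotest wrapper that sends the accel_error_inject_error call to the nvmf target application rather than to the bperf socket:

# Sketch only: the same commands appear in the trace below; comments added here.
BPERF_SOCK=/var/tmp/bperf.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randwrite -o 131072 -t 2 -q 16 -z &
# -z keeps bdevperf idle until it has been configured over the RPC socket
# (the harness waits for $BPERF_SOCK to appear before issuing the RPCs below)
$RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# collect per-command NVMe error statistics; -1 retries failed I/O indefinitely
# rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
#   (logged via the rpc_cmd wrapper; arms crc32c corruption in the accel error module, parameters as logged)
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# --ddgst enables the NVMe/TCP data digest; the corrupted crc32c results then show up
# as the data_crc32_calc_done digest errors and TRANSIENT TRANSPORT ERROR completions seen below
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests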
00:20:08.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:08.816 16:06:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:08.816 [2024-07-15 16:06:02.362434] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization...
00:20:08.816 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:08.816 Zero copy mechanism will not be used.
00:20:08.816 [2024-07-15 16:06:02.362536] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94329 ]
00:20:08.816 [2024-07-15 16:06:02.495287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:09.073 [2024-07-15 16:06:02.601818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:09.638 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:09.638 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:20:09.638 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:09.638 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:09.895 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:09.895 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:09.895 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:09.895 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:09.895 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:09.895 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:10.153 nvme0n1
00:20:10.153 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:10.153 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:10.153 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:10.153 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:10.153 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:10.153 16:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:10.409 I/O
size of 131072 is greater than zero copy threshold (65536). 00:20:10.409 Zero copy mechanism will not be used. 00:20:10.409 Running I/O for 2 seconds... 00:20:10.409 [2024-07-15 16:06:03.975249] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:03.975583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:03.975624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:03.980303] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:03.980616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:03.980656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:03.985320] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:03.985639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:03.985680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:03.990457] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:03.990760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:03.990799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:03.995486] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:03.995776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:03.995816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:04.000558] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:04.000861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:04.000902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:04.005620] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:04.005933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:04.005985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:04.010690] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:04.011007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:04.011057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:04.015673] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:04.015972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:04.016010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:04.020746] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:04.021063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:04.021101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:04.025860] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:04.026187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:04.026229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:04.030926] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.409 [2024-07-15 16:06:04.031244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.409 [2024-07-15 16:06:04.031284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.409 [2024-07-15 16:06:04.035941] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.036241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.036281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.040952] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.041268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.041306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.045990] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.046280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.046318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.050919] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.051244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.051283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.056071] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.056408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.056446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.061273] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.061604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.061643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.066261] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.066575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.066613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.071307] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.071638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.071677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.076481] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.076786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.076823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.081626] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.081979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.082016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.086795] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.087136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.087173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.091950] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.092250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.092288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.096964] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.097281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.097320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.102009] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.102301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.102338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.107107] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.107410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.107447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.112210] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 
[2024-07-15 16:06:04.112522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.112559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.117290] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.117579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.117617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.122360] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.122661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.122705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.127334] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.127635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.127675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.410 [2024-07-15 16:06:04.132341] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.410 [2024-07-15 16:06:04.132641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.410 [2024-07-15 16:06:04.132679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.668 [2024-07-15 16:06:04.137346] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.668 [2024-07-15 16:06:04.137659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.668 [2024-07-15 16:06:04.137697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.668 [2024-07-15 16:06:04.142390] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.668 [2024-07-15 16:06:04.142691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.668 [2024-07-15 16:06:04.142728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.668 [2024-07-15 16:06:04.147385] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.668 [2024-07-15 16:06:04.147673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.668 [2024-07-15 16:06:04.147712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.152334] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.152622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.152661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.157260] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.157548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.157586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.162217] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.162507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.162545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.167157] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.167444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.167482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.172180] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.172469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.172507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.177166] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.177454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.177492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.182175] 
tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.182493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.182530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.187180] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.187468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.187507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.192155] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.192440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.192483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.197131] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.197425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.197462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.202230] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.202570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.202608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.207271] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.207571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.207610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.212316] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.212618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.212657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:10.669 [2024-07-15 16:06:04.217311] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.217628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.217672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.222363] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.222675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.222712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.227328] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.227627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.227665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.232379] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.232680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.232719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.237430] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.237730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.237770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.242421] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.242724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.242762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.247400] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.247718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.247756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.252362] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.252672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.252710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.257256] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.257590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.257628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.262276] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.262566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.262604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.267214] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.267527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.267566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.272151] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.272467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.272506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.277095] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.277397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.277435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.282154] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.282473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.282510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.287074] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.669 [2024-07-15 16:06:04.287387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-07-15 16:06:04.287425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.669 [2024-07-15 16:06:04.292104] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.292420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.292458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.297056] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.297343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.297381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.301859] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.302195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.302234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.306878] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.307191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.307240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.311811] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.312161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.312211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.316939] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.317290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.317328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.322136] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.322507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.322544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.327340] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.327670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.327724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.332575] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.332884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.332921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.337648] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.338022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.338059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.342800] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.343128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.343164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.347778] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.348104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.348141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.352842] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.353156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 
[2024-07-15 16:06:04.353194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.357891] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.358226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.358265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.362847] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.363168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.363205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.367902] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.368252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.368301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.373001] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.373313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.373350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.378032] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.378334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.378372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.383067] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.383392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.383429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.388017] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.388347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.388384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.670 [2024-07-15 16:06:04.392845] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.670 [2024-07-15 16:06:04.393191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-07-15 16:06:04.393228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-07-15 16:06:04.397697] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.398046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.398087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.402626] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.402940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.402989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.407622] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.407936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.407984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.412480] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.412795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.412833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.417337] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.417680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.417720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.422418] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.422746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.422785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.427568] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.427855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.427893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.432715] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.433045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.433088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.437811] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.438124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.438165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.443056] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.443388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.443426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.448230] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.448542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.448580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.453405] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.453692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.453730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.458421] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.458738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.458776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.463660] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.464005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.464073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.468911] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.469263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.469301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.474038] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.474330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.474367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.479134] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.479442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.479479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.484322] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.484619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.484657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.489429] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.489714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.489752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.494485] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 
[2024-07-15 16:06:04.494774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.494813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.499655] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.499977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.500028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.504869] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.505197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.505235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.510011] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.510302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.510340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.515126] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.515442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.515479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.520201] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.520516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.520553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.525216] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.525532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.525569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.530209] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.530556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.530594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.535315] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.535638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.535677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.540234] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.929 [2024-07-15 16:06:04.540547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.929 [2024-07-15 16:06:04.540585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.929 [2024-07-15 16:06:04.545090] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.545404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.545440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.549996] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.550309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.550347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.554841] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.555180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.555218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.559758] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.560103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.560141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.564733] 
tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.565077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.565116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.569646] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.569986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.570023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.574610] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.574936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.574985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.579681] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.580002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.580050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.584636] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.585001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.585053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.590010] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.590300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.590338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.595150] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.595494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.595533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:10.930 [2024-07-15 16:06:04.600343] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.600673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.600711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.605320] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.605676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.605729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.610436] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.610755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.610793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.615464] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.615767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.615819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.620568] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.620914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.620951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.625541] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.625865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.625929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.630687] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.631020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.631070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.635793] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.636124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.636160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.640653] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.640970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.641019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.645450] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.645762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.645801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.650380] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.650685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.650722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:10.930 [2024-07-15 16:06:04.655146] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:10.930 [2024-07-15 16:06:04.655448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.930 [2024-07-15 16:06:04.655484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.659889] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.660220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.660256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.664755] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.665109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.665146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.669472] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.669788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.669825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.674279] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.674614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.674651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.679125] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.679451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.679486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.683911] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.684230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.684266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.688669] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.688990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.689038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.693386] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.693689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.693726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.698145] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.698472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.698508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.703131] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.703444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.703481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.707947] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.708285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.708327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.713040] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.713345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.713382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.718121] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.718410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.718447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.723159] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.723453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.723490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.728181] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.728468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.728507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.733376] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.733691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 
[2024-07-15 16:06:04.733730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.738356] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.738671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.738709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.743451] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.743764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.743802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.748524] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.748839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.748877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.753643] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.753998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.754037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.758739] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.759077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.759115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.763782] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.764116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.764154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.768787] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.769127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.769164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.774125] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.774474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.774511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.779092] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.779414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.779452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.189 [2024-07-15 16:06:04.783926] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.189 [2024-07-15 16:06:04.784250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-07-15 16:06:04.784288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.788877] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.789240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.789276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.793962] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.794263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.794300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.798942] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.799288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.799327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.803856] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.804199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.804235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.808975] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.809321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.809359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.814249] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.814601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.814641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.819336] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.819696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.819735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.824449] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.824770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.824809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.829634] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.829983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.830021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.834887] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.835251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.835292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.840127] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.840448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.840486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.845173] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.845461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.845499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.850307] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.850626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.850665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.855498] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.855782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.855820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.860473] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.860759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.860798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.865502] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.865787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.865825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.870518] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.870804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.870843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.875584] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 
[2024-07-15 16:06:04.875871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.875910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.880535] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.880822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.880861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.885605] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.885911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.885948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.890826] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.891165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.891203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.896039] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.896327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.896365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.901154] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.901450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.901488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.906193] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.906491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.906530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.190 [2024-07-15 16:06:04.911220] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.190 [2024-07-15 16:06:04.911539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.190 [2024-07-15 16:06:04.911577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.449 [2024-07-15 16:06:04.916294] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.449 [2024-07-15 16:06:04.916599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-07-15 16:06:04.916638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-07-15 16:06:04.921356] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.449 [2024-07-15 16:06:04.921646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.921685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.926410] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.926713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.926753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.931436] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.931744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.931783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.936689] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.937032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.937094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.941745] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.942063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.942105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.946981] 
tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.947331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.947381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.952118] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.952423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.952465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.957296] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.957615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.957653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.962360] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.962647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.962687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.967633] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.967966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.968017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.972732] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.973089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.973128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.977621] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.977981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.978024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:11.450 [2024-07-15 16:06:04.982828] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.983141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.983181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.987956] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.988312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.988351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.992725] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.993075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.993114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:04.997538] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:04.997874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:04.997926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.002570] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.002904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.002943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.007355] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.007691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.007747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.012097] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.012441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.012480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.017069] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.017400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.017434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.021966] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.022280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.022319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.026886] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.027244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.027277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.031839] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.032175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.032219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.037008] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.037302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.037345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.042123] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.042418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.042451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.047191] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.047485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.047518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.052262] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.052592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.052631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.057373] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.057662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.057697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.062423] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.450 [2024-07-15 16:06:05.062742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.450 [2024-07-15 16:06:05.062781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.450 [2024-07-15 16:06:05.067333] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.067678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.067713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.072198] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.072567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.072609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.077131] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.077468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.077506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.081868] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.082222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.082260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.086700] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.087072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.087131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.091554] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.091935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.091984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.096381] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.096714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.096753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.101126] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.101472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.101510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.105831] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.106221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.106259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.110523] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.110867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.110905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.115296] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.115631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 
[2024-07-15 16:06:05.115669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.120092] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.120414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.120463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.124846] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.125205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.125243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.129693] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.130049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.130083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.134606] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.134947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.134992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.139551] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.139884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.139918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.144497] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.144829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.144867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.149540] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.149844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.149878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.154555] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.154916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.154950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.159564] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.159940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.159987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.164521] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.164885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.164921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.169543] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.169918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.169954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.451 [2024-07-15 16:06:05.174481] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.451 [2024-07-15 16:06:05.174794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.451 [2024-07-15 16:06:05.174843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-07-15 16:06:05.179468] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.179792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.179840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.184391] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.184727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.184763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.189294] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.189626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.189664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.194135] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.194461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.194499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.199076] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.199410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.199449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.204085] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.204392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.204429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.209094] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.209426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.209466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.214023] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.214310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.214348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.218942] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.219307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.219345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.223860] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.224229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.224279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.228754] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.229108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.229142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.233664] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.234010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.234043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.238632] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.238974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.238997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.243684] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.244028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.244074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.248563] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.248900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.248944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.253357] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 
[2024-07-15 16:06:05.253699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.253736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.258297] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.258644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.258680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.262923] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.263286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.263342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.267649] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.267990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.268033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.272465] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.272807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.272843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.277276] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.277621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.277657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.282080] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.282434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.282468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.286968] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.287333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.287371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.291628] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.291971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.292014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.296400] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.296742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.296777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.301120] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.301462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.301495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.305995] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.306351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.306385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.310794] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.311136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.311186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.711 [2024-07-15 16:06:05.315568] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.711 [2024-07-15 16:06:05.315912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.711 [2024-07-15 16:06:05.315946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.320411] 
tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.320753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.320791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.325113] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.325477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.325512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.329864] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.330234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.330274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.334657] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.334998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.335041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.339520] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.339843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.339898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.344462] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.344827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.344863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.349428] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.349756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.349793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:11.712 [2024-07-15 16:06:05.354553] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.354889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.354934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.359430] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.359777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.359811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.364265] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.364608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.364645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.369073] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.369412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.369447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.373741] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.374106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.374142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.378565] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.378898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.378934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.383404] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.383737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.383773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.388130] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.388466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.388500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.392872] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.393218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.393252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.397633] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.398003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.398035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.402446] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.402779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.402813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.407357] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.407727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.407761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.412321] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.412684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.412718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.417284] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.417635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.417669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.422106] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.422483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.422521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.427106] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.427441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.427474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.431880] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.432245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.432284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.712 [2024-07-15 16:06:05.436631] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.712 [2024-07-15 16:06:05.436925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.712 [2024-07-15 16:06:05.436998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.971 [2024-07-15 16:06:05.441521] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.971 [2024-07-15 16:06:05.441868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-07-15 16:06:05.441914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.971 [2024-07-15 16:06:05.446382] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.971 [2024-07-15 16:06:05.446713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-07-15 16:06:05.446756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-07-15 16:06:05.451331] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.971 [2024-07-15 16:06:05.451688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-07-15 16:06:05.451737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.971 [2024-07-15 16:06:05.456206] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.971 [2024-07-15 16:06:05.456588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-07-15 16:06:05.456626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.971 [2024-07-15 16:06:05.461138] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.971 [2024-07-15 16:06:05.461468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-07-15 16:06:05.461507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.465940] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.466282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.466320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.470842] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.471191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.471232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.475680] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.476027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.476086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.480516] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.480866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.480904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.485426] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.485748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:11.972 [2024-07-15 16:06:05.485787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.490265] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.490612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.490651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.495034] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.495368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.495407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.500158] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.500484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.500522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.505270] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.505573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.505607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.510420] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.510706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.510741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.515749] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.516100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.516131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.520989] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.521328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.521361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.526397] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.526684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.526733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.531836] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.532185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.532222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.537130] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.537485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.537524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.542390] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.542677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.542715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.547513] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.547850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.547897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.552535] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.552874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.552913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.557414] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.557752] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.557791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.562359] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.562645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.562685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.567234] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.567581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.567619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.572151] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.572483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.572521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.577016] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.577335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.577382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.582106] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.582445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.582484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.587123] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.587439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.587477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.592040] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.592373] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.592411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.596820] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.597157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.597206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.601747] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.602102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.972 [2024-07-15 16:06:05.602135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.972 [2024-07-15 16:06:05.606695] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.972 [2024-07-15 16:06:05.607037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.607082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.611831] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.612151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.612214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.616898] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.617233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.617271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.621733] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.622069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.622099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.626539] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 
00:20:11.973 [2024-07-15 16:06:05.626847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.626880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.631366] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.631676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.631710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.636238] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.636577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.636613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.641157] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.641482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.641520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.646009] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.646298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.646338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.650828] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.651188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.651227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.655880] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.656219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.656255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.661047] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.661368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.661406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.666185] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.666493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.666533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.671357] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.671695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.671734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.676486] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.676831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.676870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.681584] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.681931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.681985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.686642] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.686984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.687048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.691976] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.692331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.692370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.973 [2024-07-15 16:06:05.697010] 
tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:11.973 [2024-07-15 16:06:05.697336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.973 [2024-07-15 16:06:05.697372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.702042] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.702358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.702395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.706979] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.707340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.707378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.712035] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.712357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.712396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.716957] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.717305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.717344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.721757] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.722132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.722170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.726725] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.727075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.727111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:12.232 [2024-07-15 16:06:05.731689] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.732027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.732079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.736917] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.737259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.737298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.742131] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.742419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.742467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.747200] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.747484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.747522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.752447] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.752739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.752782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.757557] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.757886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.757931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.762685] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.763014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.763057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.767854] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.768195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.768233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.773075] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.773357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.773395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.778325] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.778625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.778688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.783506] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.783838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.783872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.788794] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.789134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.789167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.794297] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.232 [2024-07-15 16:06:05.794673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.232 [2024-07-15 16:06:05.794716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.232 [2024-07-15 16:06:05.799532] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.799845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.799884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.804470] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.804793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.804831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.809513] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.809862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.809910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.814443] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.814761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.814800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.819596] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.819917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.819966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.824478] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.824799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.824837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.829490] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.829820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.829859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.834554] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.834875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.834913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.839496] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.839821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.839860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.844441] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.844775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.844814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.849416] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.849740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.849779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.854512] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.854852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.854891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.859679] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.860031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.860081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.864765] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.865121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.865154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.869775] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.870141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 
[2024-07-15 16:06:05.870176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.875148] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.875436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.875474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.880070] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.880396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.880435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.885185] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.885484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.885522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.890095] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.890434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.890472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.895216] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.895573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.895609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.900176] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.900490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.900527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.905143] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.905452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.905491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.910080] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.910436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.910475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.915118] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.915476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.915512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.920359] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.920730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.920769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.925415] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.925743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.925782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.930668] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.930990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.233 [2024-07-15 16:06:05.931046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.233 [2024-07-15 16:06:05.935826] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.233 [2024-07-15 16:06:05.936142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.234 [2024-07-15 16:06:05.936174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.234 [2024-07-15 16:06:05.941025] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90 00:20:12.234 [2024-07-15 16:06:05.941318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.234 [2024-07-15 16:06:05.941353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:12.234 [2024-07-15 16:06:05.946203] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90
00:20:12.234 [2024-07-15 16:06:05.946521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.234 [2024-07-15 16:06:05.946558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:12.234 [2024-07-15 16:06:05.951218] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90
00:20:12.234 [2024-07-15 16:06:05.951505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.234 [2024-07-15 16:06:05.951544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:12.234 [2024-07-15 16:06:05.956379] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90
00:20:12.234 [2024-07-15 16:06:05.956696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.234 [2024-07-15 16:06:05.956735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:12.493 [2024-07-15 16:06:05.961516] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90
00:20:12.493 [2024-07-15 16:06:05.961804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.493 [2024-07-15 16:06:05.961842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:12.493 [2024-07-15 16:06:05.966672] tcp.c:2123:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b08e10) with pdu=0x2000190fef90
00:20:12.493 [2024-07-15 16:06:05.966957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:12.493 [2024-07-15 16:06:05.967006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:12.493
00:20:12.493 Latency(us)
00:20:12.493 Device Information : runtime(s)    IOPS     MiB/s   Fail/s  TO/s   Average  min      max
00:20:12.493 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:12.493 nvme0n1    :       2.00           6192.68  774.09  0.00    0.00   2577.62  1586.27  5510.98
00:20:12.493 ===================================================================================================================
00:20:12.493 Total      :                      6192.68  774.09  0.00    0.00   2577.62  1586.27  5510.98
00:20:12.493 0
00:20:12.493 16:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:12.493 16:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc
bdev_get_iostat -b nvme0n1 00:20:12.493 16:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:12.493 16:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:12.493 | .driver_specific 00:20:12.493 | .nvme_error 00:20:12.493 | .status_code 00:20:12.493 | .command_transient_transport_error' 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 399 > 0 )) 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94329 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94329 ']' 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94329 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94329 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:12.752 killing process with pid 94329 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94329' 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94329 00:20:12.752 Received shutdown signal, test time was about 2.000000 seconds 00:20:12.752 00:20:12.752 Latency(us) 00:20:12.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.752 =================================================================================================================== 00:20:12.752 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.752 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94329 00:20:13.009 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94019 00:20:13.009 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94019 ']' 00:20:13.009 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94019 00:20:13.009 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:13.009 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.009 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94019 00:20:13.009 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:13.009 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:13.009 killing process with pid 94019 00:20:13.009 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94019' 00:20:13.009 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94019 00:20:13.009 16:06:06 
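As a side note on the check above: get_transient_errcount is a thin wrapper around the bdev_get_iostat RPC on the bperf (bdevperf) socket, with jq pulling a single counter out of the per-bdev NVMe error statistics. A minimal stand-alone sketch of the same query, assuming the /var/tmp/bperf.sock socket and the nvme0n1 bdev seen in the trace (the errcount variable name is ours, not the script's):

    # Query the bdevperf RPC socket for nvme0n1 I/O statistics and extract the
    # "command transient transport error" counter that the injected digest errors bump.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test only asserts that at least one such error was observed (here: 399).
    (( errcount > 0 )) && echo "saw ${errcount} transient transport errors"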
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94019 00:20:13.266 00:20:13.266 real 0m18.727s 00:20:13.266 user 0m35.974s 00:20:13.266 sys 0m4.648s 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:13.266 ************************************ 00:20:13.266 END TEST nvmf_digest_error 00:20:13.266 ************************************ 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:13.266 rmmod nvme_tcp 00:20:13.266 rmmod nvme_fabrics 00:20:13.266 rmmod nvme_keyring 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 94019 ']' 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 94019 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 94019 ']' 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 94019 00:20:13.266 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (94019) - No such process 00:20:13.266 Process with pid 94019 is not found 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 94019 is not found' 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.266 16:06:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:13.524 00:20:13.524 real 0m38.404s 00:20:13.524 user 1m12.569s 00:20:13.524 sys 0m9.578s 00:20:13.524 16:06:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:13.524 ************************************ 00:20:13.524 END TEST nvmf_digest 00:20:13.524 ************************************ 00:20:13.524 
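Stripped of the xtrace plumbing, the nvmftestfini teardown traced above boils down to roughly the following; module and interface names are taken from the trace, and the netns deletion is our assumption about what _remove_spdk_ns does here:

    # Unload the kernel NVMe/TCP initiator stack; -v produces the rmmod lines seen above
    # (nvme_fabrics and nvme_keyring may already be pulled out as dependencies of nvme-tcp).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Tear down the target network namespace and drop the initiator-side test address.
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # approximation of _remove_spdk_ns (assumption)
    ip -4 addr flush nvmf_init_if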
16:06:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:13.524 16:06:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:13.524 16:06:07 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:20:13.524 16:06:07 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:20:13.524 16:06:07 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:13.524 16:06:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:13.524 16:06:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.524 16:06:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:13.524 ************************************ 00:20:13.524 START TEST nvmf_mdns_discovery 00:20:13.524 ************************************ 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:13.524 * Looking for test storage... 00:20:13.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.524 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:20:13.525 16:06:07 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 
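The variables above feed nvmf_veth_init, and the ip/iptables calls that follow rebuild its test network from scratch: a target namespace (nvmf_tgt_ns_spdk) holding nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), an initiator-side veth nvmf_init_if (10.0.0.1), and a bridge nvmf_br joining the host-side peer ends. A condensed sketch of that topology, using only names and addresses taken from this run (the real helper in nvmf/common.sh also tears down any stale interfaces first, exactly as traced in the lines below):

  # target namespace plus three veth pairs (initiator, target, second target)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # move the target ends into the namespace and assign addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP traffic in, and let bridged traffic pass if br_netfilter
  # pushes it through iptables
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity check: both target addresses reachable from the initiator side
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3

Because both target veths hang off the same bridge as the initiator interface, 10.0.0.2 and 10.0.0.3 are reachable without any routing, which is exactly what the single-packet pings in the trace below verify.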
00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:13.525 Cannot find device "nvmf_tgt_br" 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.525 Cannot find device "nvmf_tgt_br2" 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:13.525 Cannot find device "nvmf_tgt_br" 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:13.525 Cannot find device "nvmf_tgt_br2" 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:20:13.525 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:13.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:13.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:13.782 16:06:07 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:13.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:20:13.782 00:20:13.782 --- 10.0.0.2 ping statistics --- 00:20:13.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.782 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:13.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:13.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:20:13.782 00:20:13.782 --- 10.0.0.3 ping statistics --- 00:20:13.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.782 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:13.782 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:13.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:13.783 00:20:13.783 --- 10.0.0.1 ping statistics --- 00:20:13.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.783 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:13.783 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.783 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:20:13.783 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:13.783 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.783 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:13.783 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:13.783 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.783 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:13.783 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.039 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:14.039 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.039 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.040 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.040 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=94625 00:20:14.040 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 94625 00:20:14.040 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94625 ']' 00:20:14.040 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:14.040 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.040 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.040 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.040 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.040 16:06:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.040 [2024-07-15 16:06:07.584341] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:20:14.040 [2024-07-15 16:06:07.584437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.040 [2024-07-15 16:06:07.724690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.297 [2024-07-15 16:06:07.846432] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:14.297 [2024-07-15 16:06:07.846515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.297 [2024-07-15 16:06:07.846530] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.297 [2024-07-15 16:06:07.846541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.297 [2024-07-15 16:06:07.846550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.297 [2024-07-15 16:06:07.846591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.863 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.863 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:14.863 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.863 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.863 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.121 [2024-07-15 16:06:08.726638] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.121 [2024-07-15 16:06:08.734736] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.121 null0 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.121 null1 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.121 null2 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.121 null3 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94675 00:20:15.121 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:15.122 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94675 /tmp/host.sock 00:20:15.122 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94675 ']' 00:20:15.122 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:20:15.122 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.122 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:15.122 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:15.122 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.122 16:06:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.379 [2024-07-15 16:06:08.852331] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
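With connectivity verified, the trace above launches the target application inside the namespace and configures it over /var/tmp/spdk.sock before starting the second, host-side application on /tmp/host.sock. In outline, the target-side sequence looks like this when driven directly with SPDK's rpc.py client (the trace uses the rpc_cmd wrapper, which forwards to it; the scripts/rpc.py path below is an assumption based on the repo location seen in this run):

  # start the target inside the namespace; --wait-for-rpc defers framework init
  # so the discovery filter can be set first
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  # (the test then waits for /var/tmp/spdk.sock to appear before issuing RPCs)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path; rpc_cmd wraps this

  # report only subsystems whose listener address matches the address the
  # discovery connection arrived on (DISCOVERY_FILTER=address above)
  $rpc nvmf_set_config --discovery-filter=address
  $rpc framework_start_init

  # TCP transport plus a discovery listener on the first target address
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

  # four null bdevs (1000 MB, 512-byte blocks) to back the namespaces
  for b in null0 null1 null2 null3; do $rpc bdev_null_create "$b" 1000 512; done
  $rpc bdev_wait_for_examine

The --wait-for-rpc flag matters here: nvmf_set_config is a startup-time RPC, so the discovery filter has to land before framework_start_init runs.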
00:20:15.379 [2024-07-15 16:06:08.852450] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94675 ] 00:20:15.379 [2024-07-15 16:06:08.989697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.637 [2024-07-15 16:06:09.117244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.202 16:06:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.202 16:06:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:16.202 16:06:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:20:16.202 16:06:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:20:16.202 16:06:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:20:16.202 16:06:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94707 00:20:16.202 16:06:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:20:16.202 16:06:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:20:16.202 16:06:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:20:16.202 Process 978 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:20:16.202 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:20:16.202 Successfully dropped root privileges. 00:20:16.202 avahi-daemon 0.8 starting up. 00:20:16.202 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:16.202 Successfully called chroot(). 00:20:16.202 Successfully dropped remaining capabilities. 00:20:16.202 No service file found in /etc/avahi/services. 00:20:16.202 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:16.202 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:20:16.202 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:16.202 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:16.202 Network interface enumeration completed. 00:20:16.202 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:20:16.202 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:20:16.202 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:20:17.134 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:20:17.134 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 1338394814. 
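The avahi startup messages above come from a private avahi-daemon instance that the test runs inside the target namespace, fed its configuration through process substitution (the -f /dev/fd/63 in the trace). An equivalent, slightly untangled form using the same config keys the echo -e produced:

  # stop any system-wide avahi, then run a private instance inside the target
  # namespace so mDNS answers originate only from the target veths
  avahi-daemon --kill

  ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
      '[server]' \
      'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
      'use-ipv4=yes' \
      'use-ipv6=no') &

Restricting allow-interfaces to the two target veths and disabling IPv6 keeps the advertisements, and therefore the discovery entries resolved later, limited to 10.0.0.2 and 10.0.0.3, which is what the "Joining mDNS multicast group on interface nvmf_tgt_if..." lines above reflect.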
00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.392 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:17.393 16:06:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
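On the host side (the second nvmf_tgt, reached via -s /tmp/host.sock), the trace enables bdev_nvme debug logging and starts mDNS-based discovery; the get_subsystem_names and get_bdev_list checks that follow are thin jq wrappers around two query RPCs and are expected to come back empty at this stage. Roughly, with rpc.py standing in for the rpc_cmd wrapper (path assumed):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path; rpc_cmd wraps this

  # browse _nvme-disc._tcp via avahi and attach to whatever discovery
  # services the browser resolves, using the test host NQN
  $rpc -s /tmp/host.sock log_set_flag bdev_nvme
  $rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
      -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

  # the get_subsystem_names / get_bdev_list helpers reduce to this pattern;
  # both print nothing until the target publishes its services
  $rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs

Here -s _nvme-disc._tcp names the DNS-SD service type the target's avahi instance will advertise once nvmf_publish_mdns_prr runs, and -b mdns is the prefix used for the controllers created from resolved entries (mdns0_nvme0, mdns1_nvme0 later in this log).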
00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.393 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.650 [2024-07-15 16:06:11.199067] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:17.650 16:06:11 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 [2024-07-15 16:06:11.263447] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.651 [2024-07-15 16:06:11.303438] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
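Interleaved with those empty-state checks, the target is being provisioned; the remaining listener add and the nvmf_publish_mdns_prr call follow just below. Condensed, the provisioning amounts to the following (rpc.py path assumed, order slightly regrouped from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path; rpc_cmd wraps this

  # two subsystems, each backed by a null bdev and open to the test host NQN
  for cnode in nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:cnode20; do
      $rpc nvmf_create_subsystem "$cnode"
      $rpc nvmf_subsystem_add_host "$cnode" nqn.2021-12.io.spdk:test
  done
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2

  # one data listener per subsystem plus a second discovery listener on 10.0.0.3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009

  # advertise the discovery listeners over mDNS as _nvme-disc._tcp services
  $rpc nvmf_publish_mdns_prr

nvmf_publish_mdns_prr is what hands the discovery listeners to the namespace-local avahi instance; the "Service 'spdk0'/'spdk1' of type '_nvme-disc._tcp'" resolver lines further down are the host-side mDNS browser picking those services up on both target addresses.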
00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.651 [2024-07-15 16:06:11.311378] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.651 16:06:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:20:18.580 [2024-07-15 16:06:12.099067] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:19.142 [2024-07-15 16:06:12.699104] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:19.142 [2024-07-15 16:06:12.699150] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:19.142 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:19.142 cookie is 0 00:20:19.142 is_local: 1 00:20:19.142 our_own: 0 00:20:19.142 wide_area: 0 00:20:19.142 multicast: 1 00:20:19.142 cached: 1 00:20:19.142 [2024-07-15 16:06:12.799085] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:19.142 [2024-07-15 16:06:12.799129] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:19.142 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:19.142 cookie is 0 00:20:19.142 is_local: 1 00:20:19.142 our_own: 0 00:20:19.142 wide_area: 0 00:20:19.142 multicast: 1 00:20:19.142 cached: 1 00:20:19.142 [2024-07-15 16:06:12.799144] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:19.399 [2024-07-15 16:06:12.899082] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:19.399 [2024-07-15 16:06:12.899120] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:19.399 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:19.399 cookie is 0 00:20:19.399 is_local: 1 00:20:19.399 our_own: 0 00:20:19.399 wide_area: 0 00:20:19.399 multicast: 1 00:20:19.399 cached: 1 00:20:19.399 [2024-07-15 16:06:12.999090] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:19.399 [2024-07-15 16:06:12.999181] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:19.399 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:19.399 cookie is 0 00:20:19.399 is_local: 1 00:20:19.399 our_own: 0 00:20:19.399 wide_area: 0 00:20:19.399 multicast: 1 00:20:19.399 cached: 1 00:20:19.399 [2024-07-15 16:06:12.999198] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:20.335 [2024-07-15 16:06:13.711147] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:20.335 [2024-07-15 16:06:13.711185] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:20.335 [2024-07-15 16:06:13.711202] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:20.335 [2024-07-15 16:06:13.797307] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:20:20.335 [2024-07-15 16:06:13.854692] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:20.335 [2024-07-15 16:06:13.854722] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:20.335 [2024-07-15 16:06:13.911195] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:20.335 [2024-07-15 16:06:13.911234] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:20.335 [2024-07-15 16:06:13.911253] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:20.335 [2024-07-15 16:06:13.997364] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:20:20.335 [2024-07-15 16:06:14.053690] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:20.335 [2024-07-15 16:06:14.053749] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:22.855 16:06:16 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:20:22.855 
16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:22.855 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.113 16:06:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:20:24.045 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:20:24.045 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:24.045 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.045 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.045 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:24.045 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:24.045 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.302 [2024-07-15 16:06:17.866250] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:24.302 [2024-07-15 16:06:17.867062] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:24.302 [2024-07-15 16:06:17.867095] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:24.302 [2024-07-15 16:06:17.867130] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:24.302 [2024-07-15 16:06:17.867145] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.302 [2024-07-15 16:06:17.874155] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:24.302 [2024-07-15 16:06:17.875057] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:24.302 [2024-07-15 16:06:17.875114] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.302 16:06:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:20:24.302 [2024-07-15 16:06:18.007174] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:20:24.302 [2024-07-15 16:06:18.007390] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:20:24.560 [2024-07-15 16:06:18.065498] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:24.560 [2024-07-15 16:06:18.065545] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:24.560 [2024-07-15 16:06:18.065553] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:24.560 [2024-07-15 16:06:18.065574] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:24.560 [2024-07-15 16:06:18.065619] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:24.560 [2024-07-15 16:06:18.065629] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:24.560 [2024-07-15 16:06:18.065634] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:24.560 [2024-07-15 16:06:18.065649] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:24.560 [2024-07-15 16:06:18.111302] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:24.560 [2024-07-15 16:06:18.111342] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:24.560 [2024-07-15 16:06:18.111388] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:24.560 [2024-07-15 16:06:18.111397] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.494 16:06:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.494 [2024-07-15 16:06:19.196141] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:25.494 [2024-07-15 16:06:19.196182] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:25.494 [2024-07-15 16:06:19.196218] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:25.494 [2024-07-15 16:06:19.196233] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.494 [2024-07-15 16:06:19.203156] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:25.494 [2024-07-15 16:06:19.203211] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:25.494 [2024-07-15 16:06:19.203875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.494 [2024-07-15 16:06:19.203907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.494 [2024-07-15 16:06:19.203937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.494 [2024-07-15 16:06:19.203947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.494 [2024-07-15 16:06:19.203958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.494 [2024-07-15 16:06:19.203967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.494 [2024-07-15 16:06:19.204011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.494 [2024-07-15 16:06:19.204022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.494 [2024-07-15 
16:06:19.204032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.494 16:06:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:20:25.495 [2024-07-15 16:06:19.211742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.495 [2024-07-15 16:06:19.211776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.495 [2024-07-15 16:06:19.211806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.495 [2024-07-15 16:06:19.211815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.495 [2024-07-15 16:06:19.211841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.495 [2024-07-15 16:06:19.211850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.495 [2024-07-15 16:06:19.211859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.495 [2024-07-15 16:06:19.211868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.495 [2024-07-15 16:06:19.211877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.495 [2024-07-15 16:06:19.213818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.756 [2024-07-15 16:06:19.221711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.756 [2024-07-15 16:06:19.223856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.756 [2024-07-15 16:06:19.224169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.756 [2024-07-15 16:06:19.224310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.756 [2024-07-15 16:06:19.224448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.756 [2024-07-15 16:06:19.224602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.756 [2024-07-15 16:06:19.224719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.756 [2024-07-15 16:06:19.224868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.756 [2024-07-15 16:06:19.224933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.756 [2024-07-15 16:06:19.225046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
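The errno 111 (ECONNREFUSED) retry loop that starts here is the host-side reaction to the nvmf_subsystem_remove_listener calls traced just above: the 10.0.0.2:4420 and 10.0.0.3:4420 listeners are gone, so every reconnect attempt from bdev_nvme is refused until the discovery poller processes the updated discovery log page and drops those paths. A minimal manual sketch of the same step, reusing only RPCs and names that appear in this trace (the /tmp/host.sock socket and the mdns1_nvme0 controller name are the test's own; running the check as a one-shot command is an illustrative assumption):

  # target side: drop the 4420 listener, exactly as the script did above
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # host side: once the poller catches up, only 4421 should remain for this controller
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n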
00:20:25.756 [2024-07-15 16:06:19.231736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:25.756 [2024-07-15 16:06:19.232005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.756 [2024-07-15 16:06:19.232031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f27140 with addr=10.0.0.3, port=4420 00:20:25.756 [2024-07-15 16:06:19.232043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.756 [2024-07-15 16:06:19.232061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.756 [2024-07-15 16:06:19.232076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:25.756 [2024-07-15 16:06:19.232085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:25.756 [2024-07-15 16:06:19.232095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:25.756 [2024-07-15 16:06:19.232111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.756 [2024-07-15 16:06:19.234096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.756 [2024-07-15 16:06:19.234185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.756 [2024-07-15 16:06:19.234207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.756 [2024-07-15 16:06:19.234218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.756 [2024-07-15 16:06:19.234249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.756 [2024-07-15 16:06:19.234278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.756 [2024-07-15 16:06:19.234286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.756 [2024-07-15 16:06:19.234295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.756 [2024-07-15 16:06:19.234309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:25.756 [2024-07-15 16:06:19.241941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:25.756 [2024-07-15 16:06:19.242042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.756 [2024-07-15 16:06:19.242064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f27140 with addr=10.0.0.3, port=4420 00:20:25.756 [2024-07-15 16:06:19.242075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.756 [2024-07-15 16:06:19.242091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.756 [2024-07-15 16:06:19.242105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:25.756 [2024-07-15 16:06:19.242114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:25.756 [2024-07-15 16:06:19.242123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:25.756 [2024-07-15 16:06:19.242139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.756 [2024-07-15 16:06:19.244152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.756 [2024-07-15 16:06:19.244229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.756 [2024-07-15 16:06:19.244249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.756 [2024-07-15 16:06:19.244259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.756 [2024-07-15 16:06:19.244274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.756 [2024-07-15 16:06:19.244287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.756 [2024-07-15 16:06:19.244295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.756 [2024-07-15 16:06:19.244304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.756 [2024-07-15 16:06:19.244318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:25.756 [2024-07-15 16:06:19.252017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:25.756 [2024-07-15 16:06:19.252103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.756 [2024-07-15 16:06:19.252124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f27140 with addr=10.0.0.3, port=4420 00:20:25.756 [2024-07-15 16:06:19.252134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.756 [2024-07-15 16:06:19.252150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.756 [2024-07-15 16:06:19.252163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:25.756 [2024-07-15 16:06:19.252171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:25.756 [2024-07-15 16:06:19.252180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:25.756 [2024-07-15 16:06:19.252195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.756 [2024-07-15 16:06:19.254202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.756 [2024-07-15 16:06:19.254328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.756 [2024-07-15 16:06:19.254348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.756 [2024-07-15 16:06:19.254358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.756 [2024-07-15 16:06:19.254373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.756 [2024-07-15 16:06:19.254387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.756 [2024-07-15 16:06:19.254395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.757 [2024-07-15 16:06:19.254403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.757 [2024-07-15 16:06:19.254417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:25.757 [2024-07-15 16:06:19.262076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:25.757 [2024-07-15 16:06:19.262161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.757 [2024-07-15 16:06:19.262182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f27140 with addr=10.0.0.3, port=4420 00:20:25.757 [2024-07-15 16:06:19.262193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.757 [2024-07-15 16:06:19.262209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.757 [2024-07-15 16:06:19.262237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:25.757 [2024-07-15 16:06:19.262260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:25.757 [2024-07-15 16:06:19.262269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:25.757 [2024-07-15 16:06:19.262283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.757 [2024-07-15 16:06:19.264283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.757 [2024-07-15 16:06:19.264369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.757 [2024-07-15 16:06:19.264389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.757 [2024-07-15 16:06:19.264400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.757 [2024-07-15 16:06:19.264416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.757 [2024-07-15 16:06:19.264429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.757 [2024-07-15 16:06:19.264438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.757 [2024-07-15 16:06:19.264447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.757 [2024-07-15 16:06:19.264462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:25.757 [2024-07-15 16:06:19.272130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:25.757 [2024-07-15 16:06:19.272224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.757 [2024-07-15 16:06:19.272245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f27140 with addr=10.0.0.3, port=4420 00:20:25.757 [2024-07-15 16:06:19.272255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.757 [2024-07-15 16:06:19.272270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.757 [2024-07-15 16:06:19.272283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:25.757 [2024-07-15 16:06:19.272293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:25.757 [2024-07-15 16:06:19.272302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:25.757 [2024-07-15 16:06:19.272316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.757 [2024-07-15 16:06:19.274334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.757 [2024-07-15 16:06:19.274420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.757 [2024-07-15 16:06:19.274441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.757 [2024-07-15 16:06:19.274452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.757 [2024-07-15 16:06:19.274475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.757 [2024-07-15 16:06:19.274489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.757 [2024-07-15 16:06:19.274498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.757 [2024-07-15 16:06:19.274507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.757 [2024-07-15 16:06:19.274521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
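While these reconnect attempts keep failing, the script's sleep 1 (mdns_discovery.sh@162) simply gives the discovery pollers time to converge. One way to watch that convergence interactively instead of sleeping, using only RPCs that appear elsewhere in this trace (the polling loop and the jq -e comparison are illustrative assumptions, not part of the script):

  # wait until the cnode0 controller is reachable only via port 4421
  while ! rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
          | jq -e '[.[].ctrlrs[].trid.trsvcid] == ["4421"]' > /dev/null; do
      sleep 0.1
  done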
00:20:25.757 [2024-07-15 16:06:19.282182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:25.757 [2024-07-15 16:06:19.282309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.757 [2024-07-15 16:06:19.282340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f27140 with addr=10.0.0.3, port=4420 00:20:25.757 [2024-07-15 16:06:19.282350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.757 [2024-07-15 16:06:19.282366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.757 [2024-07-15 16:06:19.282380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:25.757 [2024-07-15 16:06:19.282388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:25.757 [2024-07-15 16:06:19.282398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:25.757 [2024-07-15 16:06:19.282412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.757 [2024-07-15 16:06:19.284386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.757 [2024-07-15 16:06:19.284479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.757 [2024-07-15 16:06:19.284499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.757 [2024-07-15 16:06:19.284509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.757 [2024-07-15 16:06:19.284525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.757 [2024-07-15 16:06:19.284538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.757 [2024-07-15 16:06:19.284547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.757 [2024-07-15 16:06:19.284555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.757 [2024-07-15 16:06:19.284569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:25.757 [2024-07-15 16:06:19.292250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:25.757 [2024-07-15 16:06:19.292378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.757 [2024-07-15 16:06:19.292400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f27140 with addr=10.0.0.3, port=4420 00:20:25.757 [2024-07-15 16:06:19.292411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.757 [2024-07-15 16:06:19.292427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.757 [2024-07-15 16:06:19.292441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:25.757 [2024-07-15 16:06:19.292449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:25.757 [2024-07-15 16:06:19.292465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:25.757 [2024-07-15 16:06:19.292479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.757 [2024-07-15 16:06:19.294446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.757 [2024-07-15 16:06:19.294550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.757 [2024-07-15 16:06:19.294572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.757 [2024-07-15 16:06:19.294583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.757 [2024-07-15 16:06:19.294599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.757 [2024-07-15 16:06:19.294613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.757 [2024-07-15 16:06:19.294622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.757 [2024-07-15 16:06:19.294631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.757 [2024-07-15 16:06:19.294646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:25.757 [2024-07-15 16:06:19.302326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:25.757 [2024-07-15 16:06:19.302454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.757 [2024-07-15 16:06:19.302475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f27140 with addr=10.0.0.3, port=4420 00:20:25.757 [2024-07-15 16:06:19.302485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.757 [2024-07-15 16:06:19.302500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.757 [2024-07-15 16:06:19.302514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:25.757 [2024-07-15 16:06:19.302522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:25.757 [2024-07-15 16:06:19.302531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:25.757 [2024-07-15 16:06:19.302546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.757 [2024-07-15 16:06:19.304517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.757 [2024-07-15 16:06:19.304609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.757 [2024-07-15 16:06:19.304629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.757 [2024-07-15 16:06:19.304639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.757 [2024-07-15 16:06:19.304654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.757 [2024-07-15 16:06:19.304679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.757 [2024-07-15 16:06:19.304689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.757 [2024-07-15 16:06:19.304698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.757 [2024-07-15 16:06:19.304711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:25.757 [2024-07-15 16:06:19.312399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:25.757 [2024-07-15 16:06:19.312495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.757 [2024-07-15 16:06:19.312515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f27140 with addr=10.0.0.3, port=4420 00:20:25.757 [2024-07-15 16:06:19.312525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.758 [2024-07-15 16:06:19.312540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.758 [2024-07-15 16:06:19.312553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:25.758 [2024-07-15 16:06:19.312561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:25.758 [2024-07-15 16:06:19.312570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:25.758 [2024-07-15 16:06:19.312584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.758 [2024-07-15 16:06:19.314565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.758 [2024-07-15 16:06:19.314659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.758 [2024-07-15 16:06:19.314679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.758 [2024-07-15 16:06:19.314688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.758 [2024-07-15 16:06:19.314703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.758 [2024-07-15 16:06:19.314716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.758 [2024-07-15 16:06:19.314724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.758 [2024-07-15 16:06:19.314733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.758 [2024-07-15 16:06:19.314746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:25.758 [2024-07-15 16:06:19.322450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:25.758 [2024-07-15 16:06:19.322544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.758 [2024-07-15 16:06:19.322563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f27140 with addr=10.0.0.3, port=4420 00:20:25.758 [2024-07-15 16:06:19.322573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.758 [2024-07-15 16:06:19.322588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.758 [2024-07-15 16:06:19.322602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:25.758 [2024-07-15 16:06:19.322610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:25.758 [2024-07-15 16:06:19.322619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:25.758 [2024-07-15 16:06:19.322633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.758 [2024-07-15 16:06:19.324614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.758 [2024-07-15 16:06:19.324704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.758 [2024-07-15 16:06:19.324724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.758 [2024-07-15 16:06:19.324734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.758 [2024-07-15 16:06:19.324759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.758 [2024-07-15 16:06:19.324774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.758 [2024-07-15 16:06:19.324783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.758 [2024-07-15 16:06:19.324791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.758 [2024-07-15 16:06:19.324821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:25.758 [2024-07-15 16:06:19.332500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:25.758 [2024-07-15 16:06:19.332594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.758 [2024-07-15 16:06:19.332614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f27140 with addr=10.0.0.3, port=4420 00:20:25.758 [2024-07-15 16:06:19.332624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f27140 is same with the state(5) to be set 00:20:25.758 [2024-07-15 16:06:19.332639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f27140 (9): Bad file descriptor 00:20:25.758 [2024-07-15 16:06:19.332652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:25.758 [2024-07-15 16:06:19.332660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:25.758 [2024-07-15 16:06:19.332669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:25.758 [2024-07-15 16:06:19.332683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.758 [2024-07-15 16:06:19.334662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.758 [2024-07-15 16:06:19.334756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.758 [2024-07-15 16:06:19.334776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6de70 with addr=10.0.0.2, port=4420 00:20:25.758 [2024-07-15 16:06:19.334786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6de70 is same with the state(5) to be set 00:20:25.758 [2024-07-15 16:06:19.334800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6de70 (9): Bad file descriptor 00:20:25.758 [2024-07-15 16:06:19.334851] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:20:25.758 [2024-07-15 16:06:19.334880] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:25.758 [2024-07-15 16:06:19.334913] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:25.758 [2024-07-15 16:06:19.334949] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:25.758 [2024-07-15 16:06:19.334985] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:25.758 [2024-07-15 16:06:19.335003] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:25.758 [2024-07-15 16:06:19.335036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.758 [2024-07-15 16:06:19.335048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:25.758 [2024-07-15 16:06:19.335058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
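At this point the discovery log pages have caught up: each poller reports the 4420 entry "not found" (path removed) and the 4421 entry "found again" (path kept). The xtrace that follows is the script verifying exactly that state: both controllers and all four namespace bdevs still present, get_subsystem_paths returning only 4421 for each controller, and no new notifications, since removing a listener changes paths rather than bdevs. The notification count only jumps to 4 later, after bdev_nvme_stop_mdns_discovery tears down the four bdevs. The notification check, spelled out with the same RPC and jq filter used in the trace (the shell variable names here are illustrative; the arithmetic mirrors the script's get_notification_count helper):

  # notifications generated since notify_id 4 (the value recorded earlier in the run)
  count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 | jq '. | length')
  notify_id=$((4 + count))
  # expected: count=0 right after the listener removal, count=4 (notify_id=8) once
  # bdev_nvme_stop_mdns_discovery has removed mdns0_nvme0n1/n2 and mdns1_nvme0n1/n2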
00:20:25.758 [2024-07-15 16:06:19.335082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.758 [2024-07-15 16:06:19.420937] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:25.758 [2024-07-15 16:06:19.421958] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:20:26.692 
16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:26.692 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.950 16:06:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:20:26.950 [2024-07-15 16:06:20.599107] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' 
]] 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.884 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.143 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.144 [2024-07-15 16:06:21.753530] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:20:28.144 2024/07/15 16:06:21 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:28.144 request: 00:20:28.144 { 00:20:28.144 "method": "bdev_nvme_start_mdns_discovery", 00:20:28.144 "params": { 00:20:28.144 "name": "mdns", 00:20:28.144 "svcname": "_nvme-disc._http", 00:20:28.144 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:28.144 } 00:20:28.144 } 00:20:28.144 Got JSON-RPC error response 00:20:28.144 GoRPCClient: error on JSON-RPC call 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( 
es > 128 )) 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:28.144 16:06:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:20:28.711 [2024-07-15 16:06:22.342347] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:28.970 [2024-07-15 16:06:22.442345] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:28.970 [2024-07-15 16:06:22.542348] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:28.970 [2024-07-15 16:06:22.542417] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:28.970 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:28.970 cookie is 0 00:20:28.970 is_local: 1 00:20:28.970 our_own: 0 00:20:28.970 wide_area: 0 00:20:28.970 multicast: 1 00:20:28.970 cached: 1 00:20:28.970 [2024-07-15 16:06:22.642341] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:28.970 [2024-07-15 16:06:22.642411] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:28.970 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:28.970 cookie is 0 00:20:28.970 is_local: 1 00:20:28.970 our_own: 0 00:20:28.970 wide_area: 0 00:20:28.970 multicast: 1 00:20:28.970 cached: 1 00:20:28.970 [2024-07-15 16:06:22.642426] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:29.228 [2024-07-15 16:06:22.742393] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:29.228 [2024-07-15 16:06:22.742444] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:29.228 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:29.228 cookie is 0 00:20:29.228 is_local: 1 00:20:29.228 our_own: 0 00:20:29.228 wide_area: 0 00:20:29.228 multicast: 1 00:20:29.228 cached: 1 00:20:29.228 [2024-07-15 16:06:22.842333] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:29.229 [2024-07-15 16:06:22.842394] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:29.229 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:29.229 cookie is 0 00:20:29.229 is_local: 1 00:20:29.229 our_own: 0 00:20:29.229 wide_area: 0 00:20:29.229 multicast: 1 00:20:29.229 cached: 1 00:20:29.229 [2024-07-15 16:06:22.842407] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:30.162 [2024-07-15 16:06:23.553000] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:30.162 [2024-07-15 16:06:23.553070] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:30.162 [2024-07-15 16:06:23.553089] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:30.162 [2024-07-15 16:06:23.640141] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:20:30.162 [2024-07-15 16:06:23.700663] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:30.162 [2024-07-15 16:06:23.700727] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:30.162 [2024-07-15 16:06:23.752867] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:30.162 [2024-07-15 16:06:23.752904] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:30.162 [2024-07-15 16:06:23.752939] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:30.162 [2024-07-15 16:06:23.840029] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:20:30.421 [2024-07-15 16:06:23.900566] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:30.421 [2024-07-15 16:06:23.900606] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.698 16:06:26 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.698 [2024-07-15 16:06:26.933933] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:20:33.698 2024/07/15 16:06:26 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:33.698 request: 00:20:33.698 { 00:20:33.698 "method": "bdev_nvme_start_mdns_discovery", 00:20:33.698 "params": { 00:20:33.698 "name": "cdc", 00:20:33.698 "svcname": "_nvme-disc._tcp", 00:20:33.698 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:33.698 } 00:20:33.698 } 00:20:33.698 Got JSON-RPC error response 00:20:33.698 GoRPCClient: error on 
JSON-RPC call 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:33.698 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:33.699 16:06:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM 
EXIT 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94675 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94675 00:20:33.699 [2024-07-15 16:06:27.194074] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94707 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:20:33.699 Got SIGTERM, quitting. 00:20:33.699 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:33.699 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:33.699 avahi-daemon 0.8 exiting. 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:33.699 rmmod nvme_tcp 00:20:33.699 rmmod nvme_fabrics 00:20:33.699 rmmod nvme_keyring 00:20:33.699 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:33.956 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 94625 ']' 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 94625 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 94625 ']' 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 94625 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94625 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:33.957 killing process with pid 94625 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94625' 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 94625 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 94625 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.957 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.215 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:34.215 00:20:34.215 real 0m20.671s 00:20:34.215 user 0m40.421s 00:20:34.215 sys 0m2.102s 00:20:34.215 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.215 16:06:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.215 ************************************ 00:20:34.215 END TEST nvmf_mdns_discovery 00:20:34.215 ************************************ 00:20:34.215 16:06:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:34.215 16:06:27 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:20:34.215 16:06:27 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:34.215 16:06:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:34.215 16:06:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.215 16:06:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:34.215 ************************************ 00:20:34.215 START TEST nvmf_host_multipath 00:20:34.215 ************************************ 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:34.215 * Looking for test storage... 
00:20:34.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:34.215 16:06:27 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:34.215 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:34.216 Cannot find device "nvmf_tgt_br" 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:34.216 Cannot find device "nvmf_tgt_br2" 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:20:34.216 Cannot find device "nvmf_tgt_br" 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:20:34.216 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:34.473 Cannot find device "nvmf_tgt_br2" 00:20:34.473 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:20:34.473 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:34.473 16:06:27 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:34.473 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
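The nvmf_veth_init steps traced above build the two-path topology used for the rest of this run: a network namespace (nvmf_tgt_ns_spdk) owns the target ends of the veth pairs carrying 10.0.0.2 and 10.0.0.3, the initiator side keeps 10.0.0.1 on nvmf_init_if, and the host-side peers are enslaved to the nvmf_br bridge. A condensed sketch of that setup, using only commands that appear in this log (the pre-cleanup around the "Cannot find device" messages and all error handling are omitted):

# create the namespace and the three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side interfaces into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# address the initiator (10.0.0.1) and the two target paths (10.0.0.2, 10.0.0.3)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

With 10.0.0.2 and 10.0.0.3 living on separate namespaced interfaces behind one bridge, the multipath test below can toggle each listener independently while both stay reachable from the initiator.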
00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:34.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:20:34.474 00:20:34.474 --- 10.0.0.2 ping statistics --- 00:20:34.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.474 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:34.474 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:34.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:34.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:20:34.474 00:20:34.474 --- 10.0.0.3 ping statistics --- 00:20:34.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.474 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:34.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:34.732 00:20:34.732 --- 10.0.0.1 ping statistics --- 00:20:34.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.732 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=95262 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 95262 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 95262 ']' 00:20:34.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.732 16:06:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:34.732 [2024-07-15 16:06:28.298549] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:20:34.732 [2024-07-15 16:06:28.298648] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.732 [2024-07-15 16:06:28.440113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:34.995 [2024-07-15 16:06:28.566655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.995 [2024-07-15 16:06:28.566718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.995 [2024-07-15 16:06:28.566733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.995 [2024-07-15 16:06:28.566744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.995 [2024-07-15 16:06:28.566753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
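Before provisioning anything over RPC, the test confirms the topology with three pings and then launches the target wrapped in ip netns exec so that its TCP listeners bind inside the namespace. A minimal recap of those steps as they appear above (the waitforlisten polling loop is not spelled out in this log and is omitted):

# both forward paths and the reverse direction must answer before the target starts
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# run nvmf_tgt inside the namespace; -m 0x3 gives it cores 0 and 1 (the two reactors logged below)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!   # 95262 in this run; rpc.py talks to it on /var/tmp/spdk.sock

Because NVMF_APP is rewritten to prepend the ip netns exec wrapper (nvmf/common.sh@209 above), every target-side command from here on runs inside nvmf_tgt_ns_spdk.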
00:20:34.995 [2024-07-15 16:06:28.567369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.995 [2024-07-15 16:06:28.567421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.929 16:06:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.929 16:06:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:35.929 16:06:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:35.929 16:06:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:35.929 16:06:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:35.929 16:06:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.929 16:06:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95262 00:20:35.929 16:06:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:35.929 [2024-07-15 16:06:29.586090] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.929 16:06:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:36.186 Malloc0 00:20:36.186 16:06:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:36.442 16:06:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:36.699 16:06:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.956 [2024-07-15 16:06:30.517740] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.956 16:06:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:37.213 [2024-07-15 16:06:30.749919] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:37.213 16:06:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95360 00:20:37.213 16:06:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:37.213 16:06:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.213 16:06:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95360 /var/tmp/bdevperf.sock 00:20:37.213 16:06:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 95360 ']' 00:20:37.213 16:06:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.213 16:06:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.213 16:06:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:37.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.213 16:06:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.213 16:06:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:38.145 16:06:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.145 16:06:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:38.145 16:06:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:38.401 16:06:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:38.659 Nvme0n1 00:20:38.659 16:06:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:39.223 Nvme0n1 00:20:39.223 16:06:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:39.223 16:06:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:40.153 16:06:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:40.153 16:06:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:40.410 16:06:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:40.713 16:06:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:40.713 16:06:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95262 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:40.713 16:06:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95443 00:20:40.713 16:06:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:47.271 Attaching 4 probes... 
00:20:47.271 @path[10.0.0.2, 4421]: 17760 00:20:47.271 @path[10.0.0.2, 4421]: 18232 00:20:47.271 @path[10.0.0.2, 4421]: 18294 00:20:47.271 @path[10.0.0.2, 4421]: 18051 00:20:47.271 @path[10.0.0.2, 4421]: 18045 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95443 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:47.271 16:06:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:47.529 16:06:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:47.529 16:06:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95262 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:47.529 16:06:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95579 00:20:47.529 16:06:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:54.082 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:54.082 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:54.083 Attaching 4 probes... 
00:20:54.083 @path[10.0.0.2, 4420]: 17213 00:20:54.083 @path[10.0.0.2, 4420]: 17176 00:20:54.083 @path[10.0.0.2, 4420]: 17305 00:20:54.083 @path[10.0.0.2, 4420]: 17462 00:20:54.083 @path[10.0.0.2, 4420]: 17508 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95579 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:54.083 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:54.341 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:20:54.341 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95262 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:54.341 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95714 00:20:54.341 16:06:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:00.896 16:06:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:00.896 16:06:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:00.896 Attaching 4 probes... 
00:21:00.896 @path[10.0.0.2, 4421]: 13767 00:21:00.896 @path[10.0.0.2, 4421]: 17825 00:21:00.896 @path[10.0.0.2, 4421]: 18050 00:21:00.896 @path[10.0.0.2, 4421]: 18553 00:21:00.896 @path[10.0.0.2, 4421]: 17717 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95714 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:00.896 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:01.154 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:01.154 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95846 00:21:01.154 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95262 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:01.154 16:06:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:07.792 Attaching 4 probes... 
00:21:07.792 00:21:07.792 00:21:07.792 00:21:07.792 00:21:07.792 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95846 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:07.792 16:07:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:07.792 16:07:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:07.792 16:07:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:07.792 16:07:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95262 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:07.792 16:07:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95975 00:21:07.792 16:07:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:14.352 Attaching 4 probes... 
00:21:14.352 @path[10.0.0.2, 4421]: 16987 00:21:14.352 @path[10.0.0.2, 4421]: 17540 00:21:14.352 @path[10.0.0.2, 4421]: 17775 00:21:14.352 @path[10.0.0.2, 4421]: 18091 00:21:14.352 @path[10.0.0.2, 4421]: 17550 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95975 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:14.352 16:07:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:14.352 [2024-07-15 16:07:07.983670] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebc330 is same with the state(5) to be set 00:21:14.352 [2024-07-15 16:07:07.983756] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebc330 is same with the state(5) to be set 00:21:14.352 [2024-07-15 16:07:07.983768] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebc330 is same with the state(5) to be set 00:21:14.352 [2024-07-15 16:07:07.983777] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebc330 is same with the state(5) to be set 00:21:14.352 [2024-07-15 16:07:07.983786] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebc330 is same with the state(5) to be set 00:21:14.352 [2024-07-15 16:07:07.983794] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebc330 is same with the state(5) to be set 00:21:14.352 [2024-07-15 16:07:07.983804] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebc330 is same with the state(5) to be set 00:21:14.352 [2024-07-15 16:07:07.983812] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebc330 is same with the state(5) to be set 00:21:14.352 16:07:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:15.284 16:07:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:15.284 16:07:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96115 00:21:15.284 16:07:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95262 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:15.284 16:07:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@67 -- # active_port=4420 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:21.859 Attaching 4 probes... 00:21:21.859 @path[10.0.0.2, 4420]: 16825 00:21:21.859 @path[10.0.0.2, 4420]: 16796 00:21:21.859 @path[10.0.0.2, 4420]: 16981 00:21:21.859 @path[10.0.0.2, 4420]: 17092 00:21:21.859 @path[10.0.0.2, 4420]: 16869 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96115 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:21.859 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:21.859 [2024-07-15 16:07:15.572610] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:22.117 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:22.376 16:07:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:28.930 16:07:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:28.930 16:07:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96302 00:21:28.930 16:07:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:28.930 16:07:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95262 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:34.249 16:07:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:34.249 16:07:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:34.507 Attaching 4 probes... 
00:21:34.507 @path[10.0.0.2, 4421]: 16750 00:21:34.507 @path[10.0.0.2, 4421]: 17289 00:21:34.507 @path[10.0.0.2, 4421]: 17317 00:21:34.507 @path[10.0.0.2, 4421]: 17181 00:21:34.507 @path[10.0.0.2, 4421]: 17112 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96302 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95360 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95360 ']' 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95360 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95360 00:21:34.507 killing process with pid 95360 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95360' 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 95360 00:21:34.507 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95360 00:21:34.773 Connection closed with partial response: 00:21:34.773 00:21:34.773 00:21:34.773 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95360 00:21:34.773 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:34.773 [2024-07-15 16:06:30.816822] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:21:34.773 [2024-07-15 16:06:30.816911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95360 ] 00:21:34.773 [2024-07-15 16:06:30.954245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.773 [2024-07-15 16:06:31.077859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.773 Running I/O for 90 seconds... 
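Every confirm_io_on_port round above follows the same pattern: set the ANA state of the two listeners, let nvmf_path.bt count completions per path for roughly six seconds, then check that the listener reported in the expected state matches the port the probes saw traffic on. A condensed sketch of one round, assembled only from the RPCs and helpers visible in this log (the trace.txt redirect is an assumption; the log only shows the file being cat'ed and removed):

# steer I/O: 4420 non-optimized, 4421 optimized
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
# attach the bpftrace script to the target pid; on exit it prints "@path[10.0.0.2, <port>]: <I/O count>"
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95262 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &   # output path assumed
dtrace_pid=$!
sleep 6
active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
  | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
cat trace.txt
[[ $active_port == 4421 ]]   # trace.txt should only show meaningful counts for @path[10.0.0.2, 4421]
kill $dtrace_pid
rm -f trace.txt

The bdevperf log dumped here (try.txt) shows the initiator's half of the same story: when a listener is flipped to inaccessible, in-flight WRITEs on that path complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), and the following probe rounds show the I/O counts moving to the other port.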
00:21:34.773 [2024-07-15 16:06:41.055995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.056076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.056136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.056157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.056180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.056196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.056217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.056232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.056252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.056267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.056288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.056303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.056323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.056338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.056359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.056373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.057547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.057592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.057649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.773 [2024-07-15 16:06:41.057688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.773 [2024-07-15 16:06:41.057723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.773 [2024-07-15 16:06:41.057758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.773 [2024-07-15 16:06:41.057793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.773 [2024-07-15 16:06:41.057830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.773 [2024-07-15 16:06:41.057864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.773 [2024-07-15 16:06:41.057910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.057946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.057980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.057997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.058017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.058031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.058052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.058066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.058087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.058101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.061202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.061238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.061267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.061284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.061306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.061320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.061341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.061356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.061377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.061392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.061412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.061427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.061447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:34.773 [2024-07-15 16:06:41.061462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.061482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.773 [2024-07-15 16:06:41.061496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:34.773 [2024-07-15 16:06:41.061517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.061979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.061996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.774 [2024-07-15 16:06:41.062441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.062477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.062950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.062991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:21:34.774 [2024-07-15 16:06:41.063029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:34.774 [2024-07-15 16:06:41.063516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.774 [2024-07-15 16:06:41.063532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063747] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.063950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.063978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.064001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.064017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.064038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.064053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.067975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.067993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.068029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.068064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.068099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.068134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.068169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.068204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.068239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 
16:06:41.068259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.068282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.068318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.068353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.775 [2024-07-15 16:06:41.068388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.775 [2024-07-15 16:06:41.068423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.775 [2024-07-15 16:06:41.068459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:34.775 [2024-07-15 16:06:41.068479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.775 [2024-07-15 16:06:41.068493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.068514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:41.068528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.068549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:41.068564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.068585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:41.068599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 
cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.068620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:41.068635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.070973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:41.071009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.071039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.776 [2024-07-15 16:06:41.071067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.071091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.776 [2024-07-15 16:06:41.071106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.071127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.776 [2024-07-15 16:06:41.071142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.071162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.776 [2024-07-15 16:06:41.071177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.071197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.776 [2024-07-15 16:06:41.071211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.071232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.776 [2024-07-15 16:06:41.071246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.071267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.776 [2024-07-15 16:06:41.071281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:41.071302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.776 [2024-07-15 16:06:41.071316] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.630647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.630737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.630807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.630828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.630849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.630864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.630884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.630897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.630917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.630931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.630992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631407] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:34.776 [2024-07-15 16:06:47.631806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.631971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.631991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.632006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.632025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.632039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.632075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.632091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:34.776 [2024-07-15 16:06:47.632111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.776 [2024-07-15 16:06:47.632125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6448 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:82 nsid:1 lba:6528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.632569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.632584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.634848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.634881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.634928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.634945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.634987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:34.777 [2024-07-15 16:06:47.635656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.777 [2024-07-15 16:06:47.635671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:34.778 
[2024-07-15 16:06:47.635716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:47.635731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:47.635758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:47.635772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:47.635800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.778 [2024-07-15 16:06:47.635814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:47.635842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.778 [2024-07-15 16:06:47.635857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:47.635884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.778 [2024-07-15 16:06:47.635898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:47.635926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.778 [2024-07-15 16:06:47.635940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:47.635967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.778 [2024-07-15 16:06:47.635982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:47.636021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.778 [2024-07-15 16:06:47.636044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:47.636073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.778 [2024-07-15 16:06:47.636088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688894] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.688953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.688983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 
16:06:54.689270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46952 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.689658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.689673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:34.778 [2024-07-15 16:06:54.690279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.778 [2024-07-15 16:06:54.690309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.690357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.690396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.690450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.690489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.690527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.690566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.690604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.779 [2024-07-15 16:06:54.690643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.779 [2024-07-15 16:06:54.690681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.779 [2024-07-15 16:06:54.690719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.779 [2024-07-15 16:06:54.690757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.779 [2024-07-15 16:06:54.690795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.779 [2024-07-15 16:06:54.690834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.779 [2024-07-15 16:06:54.690872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.779 [2024-07-15 16:06:54.690917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.690941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.779 [2024-07-15 16:06:54.690971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691041] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 
m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.779 [2024-07-15 16:06:54.691972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:34.779 [2024-07-15 16:06:54.691998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 
[2024-07-15 16:06:54.692854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.692980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.692998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47432 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693741] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.693979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.693998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.694026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.780 [2024-07-15 16:06:54.694042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:34.780 [2024-07-15 16:06:54.694068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:06:54.694083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:06:54.694110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:06:54.694125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:06:54.694152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:06:54.694167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:06:54.694194] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:06:54.694209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:06:54.694237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.781 [2024-07-15 16:06:54.694257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:06:54.694301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.781 [2024-07-15 16:06:54.694318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:06:54.694345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.781 [2024-07-15 16:06:54.694360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:06:54.694387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.781 [2024-07-15 16:06:54.694402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:06:54.694429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.781 [2024-07-15 16:06:54.694444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:06:54.694471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.781 [2024-07-15 16:06:54.694486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:06:54.694514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.781 [2024-07-15 16:06:54.694529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e 
p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.985953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.985984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 
16:07:07.986285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 16:07:07.986431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 16:07:07.986444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.986972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.986999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.987015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.987044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 16:07:07.987073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94312 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:34.782 [2024-07-15 16:07:07.987512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 16:07:07.987695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 16:07:07.987716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 16:07:07.987730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.987745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 16:07:07.987758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.987774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 16:07:07.987787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.987802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 16:07:07.987815] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.987831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 16:07:07.987845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.987860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 16:07:07.987873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.987889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 16:07:07.987904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.987919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 16:07:07.987933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.987948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 16:07:07.987973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.987989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 16:07:07.988003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 16:07:07.988031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.988984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.988998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 16:07:07.989013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 16:07:07.989027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989042] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 16:07:07.989056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 16:07:07.989089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 16:07:07.989118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 16:07:07.989146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 16:07:07.989175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 16:07:07.989204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 16:07:07.989239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.784 [2024-07-15 16:07:07.989269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.784 [2024-07-15 16:07:07.989298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.784 [2024-07-15 16:07:07.989327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989342] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.784 [2024-07-15 16:07:07.989355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.784 [2024-07-15 16:07:07.989384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.784 [2024-07-15 16:07:07.989412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.784 [2024-07-15 16:07:07.989441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 16:07:07.989469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 16:07:07.989751] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1010500 was disconnected and freed. reset controller. 
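The ASYMMETRIC ACCESS INACCESSIBLE (03/02) and ABORTED - SQ DELETION (00/08) completions above are the host-side trace of a deliberate path failure: every WRITE/READ still queued on the 10.0.0.2:4420 connection is failed back to bdev_nvme, which requeues it for the other path, and the controller reset that follows reconnects to port 4421. The trigger used by multipath.sh itself is not shown in this excerpt; the lines below are only a rough sketch of how the same failover could be forced from the target side with the rpc.py interface that appears elsewhere in this log (the ANA-state RPC name and its flag spellings are assumptions, not taken from this output):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Mark the 4420 listener inaccessible so an ANA-aware host drains it and fails over
    # (assumed RPC name and flags; they may differ between SPDK releases).
    $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    # Or drop the listener outright, mirroring the nvmf_subsystem_add_listener call
    # visible later in this log.
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420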
00:21:34.784 [2024-07-15 16:07:07.991418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.784 [2024-07-15 16:07:07.991508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.784 [2024-07-15 16:07:07.991532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.784 [2024-07-15 16:07:07.991567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dc4d0 (9): Bad file descriptor
00:21:34.784 [2024-07-15 16:07:07.991915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.784 [2024-07-15 16:07:07.991947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc4d0 with addr=10.0.0.2, port=4421
00:21:34.784 [2024-07-15 16:07:07.991982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dc4d0 is same with the state(5) to be set
00:21:34.784 [2024-07-15 16:07:07.992493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dc4d0 (9): Bad file descriptor
00:21:34.784 [2024-07-15 16:07:07.992752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.784 [2024-07-15 16:07:07.992778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.784 [2024-07-15 16:07:07.992794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.784 [2024-07-15 16:07:07.993024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.784 [2024-07-15 16:07:07.993049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.784 [2024-07-15 16:07:18.076653] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
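The reconnect to 10.0.0.2 port 4421 is first refused (connect() errno = 111) and the reset is reported as failed; bdev_nvme keeps retrying and roughly ten seconds later logs "Resetting controller successful." Once the path is back, the state can be checked from the initiator side over the bdevperf RPC socket; a small sketch, assuming the /var/tmp/bdevperf.sock path that the timeout test below also uses and that the bdev carries the same Nvme0n1 name as the job in the summary that follows:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # List the NVMe-oF controllers bdevperf has attached and their current state.
    $rpc -s "$sock" bdev_nvme_get_controllers
    # Confirm the bdev under test is still present and serving I/O.
    $rpc -s "$sock" bdev_get_bdevs -b Nvme0n1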
00:21:34.784 Received shutdown signal, test time was about 55.309900 seconds 00:21:34.784 00:21:34.784 Latency(us) 00:21:34.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.784 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:34.784 Verification LBA range: start 0x0 length 0x4000 00:21:34.784 Nvme0n1 : 55.31 7465.60 29.16 0.00 0.00 17112.34 346.30 7046430.72 00:21:34.784 =================================================================================================================== 00:21:34.784 Total : 7465.60 29.16 0.00 0.00 17112.34 346.30 7046430.72 00:21:34.784 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:35.043 rmmod nvme_tcp 00:21:35.043 rmmod nvme_fabrics 00:21:35.043 rmmod nvme_keyring 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 95262 ']' 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 95262 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95262 ']' 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95262 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95262 00:21:35.043 killing process with pid 95262 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95262' 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 95262 00:21:35.043 16:07:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95262 00:21:35.301 16:07:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:35.301 16:07:29 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:35.301 16:07:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:35.301 16:07:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.301 16:07:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:35.301 16:07:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.301 16:07:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.301 16:07:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.559 16:07:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:35.560 00:21:35.560 real 1m1.285s 00:21:35.560 user 2m53.682s 00:21:35.560 sys 0m13.471s 00:21:35.560 16:07:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:35.560 16:07:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:35.560 ************************************ 00:21:35.560 END TEST nvmf_host_multipath 00:21:35.560 ************************************ 00:21:35.560 16:07:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:35.560 16:07:29 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:35.560 16:07:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:35.560 16:07:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.560 16:07:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:35.560 ************************************ 00:21:35.560 START TEST nvmf_timeout 00:21:35.560 ************************************ 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:35.560 * Looking for test storage... 
00:21:35.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.560 
16:07:29 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.560 16:07:29 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:35.560 Cannot find device "nvmf_tgt_br" 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:35.560 Cannot find device "nvmf_tgt_br2" 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:35.560 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:35.818 Cannot find device "nvmf_tgt_br" 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:35.818 Cannot find device "nvmf_tgt_br2" 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:35.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:35.818 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:35.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:21:35.818 00:21:35.818 --- 10.0.0.2 ping statistics --- 00:21:35.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.818 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:35.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:35.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:21:35.818 00:21:35.818 --- 10.0.0.3 ping statistics --- 00:21:35.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.818 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:35.818 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:36.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:21:36.075 00:21:36.075 --- 10.0.0.1 ping statistics --- 00:21:36.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.075 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=96625 00:21:36.075 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:36.076 16:07:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 96625 00:21:36.076 16:07:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96625 ']' 00:21:36.076 16:07:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.076 16:07:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.076 16:07:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.076 16:07:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.076 16:07:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:36.076 [2024-07-15 16:07:29.630180] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:21:36.076 [2024-07-15 16:07:29.630338] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.076 [2024-07-15 16:07:29.764720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:36.333 [2024-07-15 16:07:29.878862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.333 [2024-07-15 16:07:29.878925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.333 [2024-07-15 16:07:29.878936] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.333 [2024-07-15 16:07:29.878944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.333 [2024-07-15 16:07:29.878951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.333 [2024-07-15 16:07:29.879126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.333 [2024-07-15 16:07:29.879138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.897 16:07:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.897 16:07:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:36.897 16:07:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.897 16:07:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.897 16:07:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:37.155 16:07:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.155 16:07:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.155 16:07:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:37.412 [2024-07-15 16:07:30.904592] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.412 16:07:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:37.668 Malloc0 00:21:37.668 16:07:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:37.973 16:07:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:38.231 16:07:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.231 [2024-07-15 16:07:31.940543] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.489 16:07:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:38.489 16:07:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96717 00:21:38.489 16:07:31 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 96717 /var/tmp/bdevperf.sock 00:21:38.489 16:07:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96717 ']' 00:21:38.489 16:07:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.489 16:07:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.489 16:07:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.489 16:07:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.489 16:07:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:38.489 [2024-07-15 16:07:32.006451] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:21:38.489 [2024-07-15 16:07:32.006553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96717 ] 00:21:38.489 [2024-07-15 16:07:32.141378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.746 [2024-07-15 16:07:32.250269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.311 16:07:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.311 16:07:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:39.311 16:07:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:39.569 16:07:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:39.827 NVMe0n1 00:21:39.827 16:07:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:39.827 16:07:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96766 00:21:39.827 16:07:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:40.085 Running I/O for 10 seconds... 
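Editor's note: for readers trying to follow the xtrace above, the target/initiator bring-up reduces to a short sequence of rpc.py calls. The sketch below only restates commands already visible in this run (TCP transport creation, the Malloc0 namespace, the 10.0.0.2:4420 listener, and the bdevperf controller attach with a 5-second controller-loss timeout and 2-second reconnect delay). It assumes nvmf_tgt is already running and serving the default /var/tmp/spdk.sock RPC socket, as set up earlier in this log, and it skips the waitforlisten polling the real host/timeout.sh performs, so treat it as a condensed reproduction aid rather than the test script itself.

#!/usr/bin/env bash
# Condensed from the host/timeout.sh trace above; paths, IPs and sockets are the
# ones used by this particular run.
set -e
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# Target side: TCP transport, a 64 MB malloc bdev (512-byte blocks), and a
# subsystem exporting it on 10.0.0.2:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf waits for RPC configuration (-z); the controller is
# then attached with the reconnect knobs this timeout test exercises.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
sleep 1   # crude stand-in for the script's waitforlisten polling on bdevperf.sock
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the 10-second verify workload (run in the background by the test).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &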
00:21:41.045 16:07:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.045 [2024-07-15 16:07:34.738657] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.045 [2024-07-15 16:07:34.738731] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.045 [2024-07-15 16:07:34.738744] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.045 [2024-07-15 16:07:34.738752] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.045 [2024-07-15 16:07:34.738761] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.045 [2024-07-15 16:07:34.738770] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.045 [2024-07-15 16:07:34.738779] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.045 [2024-07-15 16:07:34.738787] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738796] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738804] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738813] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738821] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738829] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738838] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738846] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738854] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738862] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738871] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738879] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738887] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.738895] 
tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set [... the same recv-state message repeats verbatim for every timestamp from 16:07:34.738904 through 16:07:34.739261 ...] 00:21:41.046 [2024-07-15 16:07:34.739270] tcp.c:1663:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.739278] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.739286] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.739296] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e358e0 is same with the state(5) to be set 00:21:41.046 [2024-07-15 16:07:34.740591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.046 [2024-07-15 16:07:34.740642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.740672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.046 [2024-07-15 16:07:34.740686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.740700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.046 [2024-07-15 16:07:34.740710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.740723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.046 [2024-07-15 16:07:34.740734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.740747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.046 [2024-07-15 16:07:34.740757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.740770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.046 [2024-07-15 16:07:34.740780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.740793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.046 [2024-07-15 16:07:34.740803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.740816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.046 [2024-07-15 16:07:34.740826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.740839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:41.046 [2024-07-15 16:07:34.740850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.740863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.046 [2024-07-15 16:07:34.740873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.741396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.046 [2024-07-15 16:07:34.741410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.741423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.046 [2024-07-15 16:07:34.741433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.741448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.046 [2024-07-15 16:07:34.741459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.741471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.046 [2024-07-15 16:07:34.741481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.046 [2024-07-15 16:07:34.741494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.046 [2024-07-15 16:07:34.741505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.741518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.741927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.741944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.741974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.742000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.742013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.742026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.742036] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.742048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.742058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.742071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.742082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.742208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.742222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.742235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.742246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.742639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.742668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.742682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.742693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.742707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.742718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.742730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.742740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.742753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.742766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.743021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.743047] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.743347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.743369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.743383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.743395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.743408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.743419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.743431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.743442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.743456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.743466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.743706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.743720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.743733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.744171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.744206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.047 [2024-07-15 16:07:34.744219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.744233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.744243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.744256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.744268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.744280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.744291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.744303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.744314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.744558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.744574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.744588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.744872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.744902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.744915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.744928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.744939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.744952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.744991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.745291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.745321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.745337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.745348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.745361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.745372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:41.047 [2024-07-15 16:07:34.745386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.745397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.745409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.745420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.745648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.745677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.745692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.745702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.745842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.745854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.746118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.746145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.746160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.746171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.746184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-07-15 16:07:34.746194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.047 [2024-07-15 16:07:34.746207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.746218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.746230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.746241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.746399] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.746541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.746665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.746679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.746830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.746943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.746983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.747001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.747015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.747025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.747038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.747049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.747062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.747333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.747430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.747443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.747457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.747468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.747482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.747492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.747505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-07-15 16:07:34.747515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.747528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.747539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.747689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.747702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.747972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.747997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.748012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.748023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.748035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.748046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.748058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.748069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.748353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.748448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.748464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.748475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.748489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.748500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.748513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:24 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.748523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.748535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.748553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.748836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.748920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.748936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.748947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.748981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.749270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.749502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.749525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.749540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.749550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.749796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.749809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.749822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.749832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.749845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.750117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.750137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85440 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.750419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.750449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.750461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.750474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.750486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.750499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.750748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.750776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.750789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.750802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.750813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.750825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.751091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.751110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.751122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.751136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.751357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.751384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 16:07:34.751396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.048 [2024-07-15 16:07:34.751409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.048 [2024-07-15 
16:07:34.751420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.751433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.751701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.751725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.751737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.751750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.751761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.751773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.751910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.752164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.752180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.752193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.752203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.752330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.752464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.752494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.752742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.752768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.753020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.753050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.753064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.753077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.753096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.753109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.753501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.753644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.753740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.753763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:41.049 [2024-07-15 16:07:34.753775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.753788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.754055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.754087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.754100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.754113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.754123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.754136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.754262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.754406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.754514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.754532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.754543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.754556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.754567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.754710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.754836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.755100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.755126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.755142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.755153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.755166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.755176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.755305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.755567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.755723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.049 [2024-07-15 16:07:34.755800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.755842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:41.049 [2024-07-15 16:07:34.755862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:41.049 [2024-07-15 16:07:34.756110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85048 len:8 PRP1 0x0 PRP2 0x0 00:21:41.049 [2024-07-15 16:07:34.756125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.756186] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x100a8d0 was disconnected and freed. reset controller. 
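The paired print_command/print_completion records above are the host driver flushing its I/O submission queue: the TCP qpair to 10.0.0.2:4420 has gone away, so every queued WRITE and READ is completed manually with ABORTED - SQ DELETION (status 00/08), after which qpair 0x100a8d0 is freed and a controller reset is scheduled. To quantify a flood like this from a saved copy of the console output, a minimal sketch (the log file name is hypothetical):

  log=nvmf_timeout_console.log                      # hypothetical capture of this console output
  grep -c 'ABORTED - SQ DELETION' "$log"            # total aborted completions
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$log" | sort | uniq -c    # READ vs WRITE breakdown
  grep -o 'lba:[0-9]*' "$log" | sort -t: -k2 -n | uniq | head                          # lowest affected LBAs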
00:21:41.049 [2024-07-15 16:07:34.756299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.049 [2024-07-15 16:07:34.756317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.756330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.049 [2024-07-15 16:07:34.756340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.756351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.049 [2024-07-15 16:07:34.756361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.756371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.049 [2024-07-15 16:07:34.756381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-07-15 16:07:34.756392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9d240 is same with the state(5) to be set 00:21:41.049 [2024-07-15 16:07:34.756612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:41.049 [2024-07-15 16:07:34.756638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9d240 (9): Bad file descriptor 00:21:41.049 [2024-07-15 16:07:34.756743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.049 [2024-07-15 16:07:34.756768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9d240 with addr=10.0.0.2, port=4420 00:21:41.049 [2024-07-15 16:07:34.756781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9d240 is same with the state(5) to be set 00:21:41.049 [2024-07-15 16:07:34.756802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9d240 (9): Bad file descriptor 00:21:41.049 [2024-07-15 16:07:34.756821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:41.049 [2024-07-15 16:07:34.756832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:41.049 [2024-07-15 16:07:34.756843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:41.049 [2024-07-15 16:07:34.756866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:41.049 [2024-07-15 16:07:34.756879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:41.049 16:07:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:43.578 [2024-07-15 16:07:36.757204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-07-15 16:07:36.757299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9d240 with addr=10.0.0.2, port=4420 00:21:43.578 [2024-07-15 16:07:36.757319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9d240 is same with the state(5) to be set 00:21:43.578 [2024-07-15 16:07:36.757351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9d240 (9): Bad file descriptor 00:21:43.578 [2024-07-15 16:07:36.757390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:43.578 [2024-07-15 16:07:36.757406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:43.578 [2024-07-15 16:07:36.757418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:43.578 [2024-07-15 16:07:36.757450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.578 [2024-07-15 16:07:36.757465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:43.578 16:07:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:21:43.579 16:07:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:43.579 16:07:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:43.579 16:07:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:21:43.579 16:07:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:21:43.579 16:07:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:43.579 16:07:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:43.579 16:07:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:21:43.579 16:07:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:21:45.476 [2024-07-15 16:07:38.757613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.476 [2024-07-15 16:07:38.757685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9d240 with addr=10.0.0.2, port=4420 00:21:45.476 [2024-07-15 16:07:38.757704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9d240 is same with the state(5) to be set 00:21:45.476 [2024-07-15 16:07:38.757735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9d240 (9): Bad file descriptor 00:21:45.476 [2024-07-15 16:07:38.757763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:45.476 [2024-07-15 16:07:38.757775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:45.476 [2024-07-15 16:07:38.757786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
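Between the failed resets (connect() to 10.0.0.2:4420 is refused with errno 111 at 16:07:34, 16:07:36 and 16:07:38) the script polls the bdevperf application over its RPC socket, and at this point bdev_nvme_get_controllers and bdev_get_bdevs still report NVMe0 and NVMe0n1; a few seconds later the same checks come back empty once the reconnect attempts are abandoned. These are exactly the commands visible in the trace above and can be rerun by hand while bdevperf is up:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'   # expected at this point: NVMe0
  "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'              # expected at this point: NVMe0n1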
00:21:45.476 [2024-07-15 16:07:38.757817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.476 [2024-07-15 16:07:38.757832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.392 [2024-07-15 16:07:40.757998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.392 [2024-07-15 16:07:40.758074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.392 [2024-07-15 16:07:40.758089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.392 [2024-07-15 16:07:40.758100] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:47.392 [2024-07-15 16:07:40.758133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.348 00:21:48.348 Latency(us) 00:21:48.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.348 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:48.348 Verification LBA range: start 0x0 length 0x4000 00:21:48.348 NVMe0n1 : 8.20 1290.51 5.04 15.62 0.00 98064.19 2293.76 7046430.72 00:21:48.348 =================================================================================================================== 00:21:48.348 Total : 1290.51 5.04 15.62 0.00 98064.19 2293.76 7046430.72 00:21:48.348 0 00:21:48.606 16:07:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:21:48.606 16:07:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:48.606 16:07:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:48.864 16:07:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:21:48.864 16:07:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:21:48.864 16:07:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:48.864 16:07:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:49.123 16:07:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:21:49.123 16:07:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96766 00:21:49.123 16:07:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96717 00:21:49.123 16:07:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96717 ']' 00:21:49.123 16:07:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96717 00:21:49.123 16:07:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:49.123 16:07:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:49.123 16:07:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96717 00:21:49.381 16:07:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:49.381 16:07:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:49.381 killing process with pid 96717 00:21:49.381 16:07:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96717' 00:21:49.381 16:07:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96717 00:21:49.381 
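A quick sanity check of the bdevperf summary above (queue depth 128, 4096-byte verify workload; all numbers are taken from the table):

  1290.51 IOPS x 4096 B              ≈ 5.04 MiB/s     (matches the reported throughput)
  128 in flight / 98064.19 us avg    ≈ 1305 IOPS      (Little's law, consistent with the reported 1290.51)
  15.62 Fail/s x 8.20 s              ≈ 128 failed I/Os (one full queue depth, the in-flight batch aborted with the controller)

The empty '' == '' comparisons at 16:07:42 above show that by then the NVMe0 controller and the NVMe0n1 bdev have both been removed from bdevperf, which is what the test asserts before killing the process.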
Received shutdown signal, test time was about 9.290148 seconds 00:21:49.381 00:21:49.381 Latency(us) 00:21:49.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.381 =================================================================================================================== 00:21:49.381 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:49.381 16:07:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96717 00:21:49.381 16:07:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.639 [2024-07-15 16:07:43.292852] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.639 16:07:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:49.639 16:07:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96918 00:21:49.639 16:07:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96918 /var/tmp/bdevperf.sock 00:21:49.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.639 16:07:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96918 ']' 00:21:49.639 16:07:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.639 16:07:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.639 16:07:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.639 16:07:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.639 16:07:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:49.639 [2024-07-15 16:07:43.362701] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:21:49.639 [2024-07-15 16:07:43.363111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96918 ] 00:21:49.896 [2024-07-15 16:07:43.499460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.896 [2024-07-15 16:07:43.613247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.829 16:07:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.829 16:07:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:50.829 16:07:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:51.087 16:07:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:21:51.346 NVMe0n1 00:21:51.346 16:07:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96970 00:21:51.346 16:07:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:51.346 16:07:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:21:51.346 Running I/O for 10 seconds... 00:21:52.279 16:07:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.540 [2024-07-15 16:07:46.144329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.540 [2024-07-15 16:07:46.144941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.540 [2024-07-15 16:07:46.145418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.540 [2024-07-15 16:07:46.145849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.540 [2024-07-15 16:07:46.146285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.540 [2024-07-15 16:07:46.146731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.540 [2024-07-15 16:07:46.147188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.540 [2024-07-15 16:07:46.147606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.540 [2024-07-15 16:07:46.147636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.540 [2024-07-15 16:07:46.147649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
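The second half of the test repeats the outage with an explicit reconnect policy. A fresh TCP listener is added for nqn.2016-06.io.spdk:cnode1, bdevperf is restarted (-q 128 -o 4096 -w verify -t 10 -f), the controller is attached with --ctrlr-loss-timeout-sec 5, --fast-io-fail-timeout-sec 2 and --reconnect-delay-sec 1 (roughly: retry about once per second, fail new I/O fast after two seconds, give the controller up after five), and one second into the run the listener is removed again at 16:07:46, which is what produces the ABORTED - SQ DELETION flood that follows. Condensed into a sketch, with flags copied from the trace above (the real script backgrounds bdevperf and waits for its RPC socket before issuing these calls):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # triggers the abort flood below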
00:21:52.541 [2024-07-15 16:07:46.147663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147901] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.147978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.147993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148160] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.541 [2024-07-15 16:07:46.148171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.541 [2024-07-15 16:07:46.148194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.541 [2024-07-15 16:07:46.148218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.541 [2024-07-15 16:07:46.148241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.541 [2024-07-15 16:07:46.148267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.541 [2024-07-15 16:07:46.148290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.541 [2024-07-15 16:07:46.148314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:52.541 [2024-07-15 16:07:46.148658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.541 [2024-07-15 16:07:46.148694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.541 [2024-07-15 16:07:46.148704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.148718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.542 [2024-07-15 16:07:46.148729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.148742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.148752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.148765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.148776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.148788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.148799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.148812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.148823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.148836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.148847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.148861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.148872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.148885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.148896] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.148909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.148928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.148942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.148953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.150265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.150652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.151083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.151488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.151891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.542 [2024-07-15 16:07:46.152627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.542 [2024-07-15 16:07:46.152650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.542 [2024-07-15 16:07:46.152673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.542 [2024-07-15 16:07:46.152697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.542 [2024-07-15 16:07:46.152723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.542 [2024-07-15 16:07:46.152746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.542 [2024-07-15 16:07:46.152770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.542 [2024-07-15 16:07:46.152817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.542 [2024-07-15 16:07:46.152830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.152841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.152853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.152864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.152876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.152886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.152899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.152910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.152922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.152933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.152945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.152970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 
[2024-07-15 16:07:46.152987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.152999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153460] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.543 [2024-07-15 16:07:46.153543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.543 [2024-07-15 16:07:46.153708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23468d0 is same with the state(5) to be set 00:21:52.543 [2024-07-15 16:07:46.153736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:52.543 [2024-07-15 16:07:46.153745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:52.543 [2024-07-15 16:07:46.153755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87112 len:8 PRP1 0x0 PRP2 0x0 00:21:52.543 [2024-07-15 16:07:46.153765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.153823] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23468d0 was disconnected and freed. reset controller. 00:21:52.543 [2024-07-15 16:07:46.153969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.543 [2024-07-15 16:07:46.153990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.154003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.543 [2024-07-15 16:07:46.154013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.543 [2024-07-15 16:07:46.154030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.543 [2024-07-15 16:07:46.154042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.544 [2024-07-15 16:07:46.154053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.544 [2024-07-15 16:07:46.154064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.544 [2024-07-15 16:07:46.154082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9240 is same with the state(5) to be set 00:21:52.544 [2024-07-15 16:07:46.154303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:52.544 [2024-07-15 16:07:46.154331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9240 (9): Bad file descriptor 00:21:52.544 [2024-07-15 16:07:46.154991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.544 [2024-07-15 16:07:46.155029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9240 with addr=10.0.0.2, port=4420 00:21:52.544 [2024-07-15 16:07:46.155044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9240 is same with the state(5) to be set 00:21:52.544 [2024-07-15 16:07:46.155069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x22d9240 (9): Bad file descriptor 00:21:52.544 [2024-07-15 16:07:46.155091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:52.544 [2024-07-15 16:07:46.155102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:52.544 [2024-07-15 16:07:46.155115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:52.544 [2024-07-15 16:07:46.155139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.544 [2024-07-15 16:07:46.155154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:52.544 16:07:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:21:53.478 [2024-07-15 16:07:47.155338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.478 [2024-07-15 16:07:47.155871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9240 with addr=10.0.0.2, port=4420 00:21:53.478 [2024-07-15 16:07:47.156354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9240 is same with the state(5) to be set 00:21:53.478 [2024-07-15 16:07:47.156806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9240 (9): Bad file descriptor 00:21:53.478 [2024-07-15 16:07:47.156842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:53.478 [2024-07-15 16:07:47.156855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:53.478 [2024-07-15 16:07:47.156867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:53.478 [2024-07-15 16:07:47.156899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:53.478 [2024-07-15 16:07:47.156915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:53.478 16:07:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:53.737 [2024-07-15 16:07:47.419573] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.737 16:07:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96970 00:21:54.669 [2024-07-15 16:07:48.170390] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:01.295 00:22:01.295 Latency(us) 00:22:01.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.295 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:01.295 Verification LBA range: start 0x0 length 0x4000 00:22:01.295 NVMe0n1 : 10.01 6486.04 25.34 0.00 0.00 19701.19 1980.97 3035150.89 00:22:01.295 =================================================================================================================== 00:22:01.295 Total : 6486.04 25.34 0.00 0.00 19701.19 1980.97 3035150.89 00:22:01.295 0 00:22:01.295 16:07:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97087 00:22:01.295 16:07:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.295 16:07:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:22:01.553 Running I/O for 10 seconds... 00:22:02.484 16:07:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.744 [2024-07-15 16:07:56.269949] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270195] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270212] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270221] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270231] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270240] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270249] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270263] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270271] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270280] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270289] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270297] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270305] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270313] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270322] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 [2024-07-15 16:07:56.270330] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.744 (the identical tcp.c:1663:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0x1e8c680 repeats at each successive timestamp here; duplicate entries omitted) 00:22:02.745 [2024-07-15 16:07:56.272117] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same
with the state(5) to be set 00:22:02.745 [2024-07-15 16:07:56.272125] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.745 [2024-07-15 16:07:56.272135] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.745 [2024-07-15 16:07:56.272144] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8c680 is same with the state(5) to be set 00:22:02.745 [2024-07-15 16:07:56.273204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.745 [2024-07-15 16:07:56.273238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.745 [2024-07-15 16:07:56.273262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.745 [2024-07-15 16:07:56.273275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.745 [2024-07-15 16:07:56.273289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.745 [2024-07-15 16:07:56.273301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.745 [2024-07-15 16:07:56.273314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.745 [2024-07-15 16:07:56.273325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.745 [2024-07-15 16:07:56.273337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.745 [2024-07-15 16:07:56.273348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.745 [2024-07-15 16:07:56.273360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.745 [2024-07-15 16:07:56.273371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.745 [2024-07-15 16:07:56.273384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.745 [2024-07-15 16:07:56.273394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.745 [2024-07-15 16:07:56.273407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.745 [2024-07-15 16:07:56.273417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.745 [2024-07-15 16:07:56.273430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.745 [2024-07-15 16:07:56.273441] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.745 [2024-07-15 16:07:56.273453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.273952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.273987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 
[2024-07-15 16:07:56.274464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.746 [2024-07-15 16:07:56.274512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.746 [2024-07-15 16:07:56.274523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.747 [2024-07-15 16:07:56.274547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.747 [2024-07-15 16:07:56.274570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.747 [2024-07-15 16:07:56.274594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.747 [2024-07-15 16:07:56.274618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:110 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.274977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.274989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.275002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.275013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.275026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.275036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.275049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.275059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.275072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.275082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.275095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.275105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.275118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.275128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.276013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.276437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.276864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.277282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.277709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80808 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.278158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.278585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.279013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.279438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.279865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.280286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.280712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.281141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.281231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.281754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.281772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.281785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.281796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.281908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.281922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.281944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.281972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.281990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.282001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.282013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 
16:07:56.282024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.282036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.282048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.282060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.282071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.282083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.282094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.282106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.747 [2024-07-15 16:07:56.282117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.747 [2024-07-15 16:07:56.282130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.282879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.282892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.283145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.283178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.283192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.283204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.283215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.283227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.748 [2024-07-15 16:07:56.283238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.283279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.748 [2024-07-15 16:07:56.283294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81144 len:8 PRP1 0x0 PRP2 0x0 00:22:02.748 [2024-07-15 16:07:56.283658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.283682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.748 [2024-07-15 16:07:56.283692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.748 [2024-07-15 16:07:56.283813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81152 len:8 PRP1 0x0 PRP2 0x0 00:22:02.748 [2024-07-15 16:07:56.283835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.283973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.748 [2024-07-15 16:07:56.284084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.748 [2024-07-15 16:07:56.284097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81160 len:8 PRP1 0x0 PRP2 0x0 00:22:02.748 [2024-07-15 16:07:56.284108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.284120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.748 [2024-07-15 16:07:56.284129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.748 [2024-07-15 16:07:56.284137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81168 len:8 PRP1 0x0 PRP2 0x0 00:22:02.748 [2024-07-15 16:07:56.284147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.284158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.748 [2024-07-15 16:07:56.284166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.748 [2024-07-15 16:07:56.284174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81176 len:8 PRP1 0x0 PRP2 0x0 00:22:02.748 [2024-07-15 16:07:56.284183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.748 [2024-07-15 16:07:56.284194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.749 [2024-07-15 16:07:56.284202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.749 [2024-07-15 16:07:56.284465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81184 len:8 PRP1 
0x0 PRP2 0x0 00:22:02.749 [2024-07-15 16:07:56.284569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.749 [2024-07-15 16:07:56.284583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.749 [2024-07-15 16:07:56.284592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.749 [2024-07-15 16:07:56.284601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81192 len:8 PRP1 0x0 PRP2 0x0 00:22:02.749 [2024-07-15 16:07:56.284721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.749 [2024-07-15 16:07:56.285014] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2356c00 was disconnected and freed. reset controller. 00:22:02.749 [2024-07-15 16:07:56.285136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.749 [2024-07-15 16:07:56.285242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.749 [2024-07-15 16:07:56.285258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.749 [2024-07-15 16:07:56.285269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.749 [2024-07-15 16:07:56.285389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.749 [2024-07-15 16:07:56.285403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.749 [2024-07-15 16:07:56.285548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.749 [2024-07-15 16:07:56.285567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.749 [2024-07-15 16:07:56.285672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9240 is same with the state(5) to be set 00:22:02.749 [2024-07-15 16:07:56.286153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.749 [2024-07-15 16:07:56.286197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9240 (9): Bad file descriptor 00:22:02.749 [2024-07-15 16:07:56.286313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.749 [2024-07-15 16:07:56.286339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9240 with addr=10.0.0.2, port=4420 00:22:02.749 [2024-07-15 16:07:56.286353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9240 is same with the state(5) to be set 00:22:02.749 [2024-07-15 16:07:56.286375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9240 (9): Bad file descriptor 00:22:02.749 [2024-07-15 16:07:56.286395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:22:02.749 [2024-07-15 16:07:56.286517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.749 [2024-07-15 16:07:56.286537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.749 [2024-07-15 16:07:56.286792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.749 [2024-07-15 16:07:56.286825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.749 16:07:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:03.679 [2024-07-15 16:07:57.287004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.679 [2024-07-15 16:07:57.287087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9240 with addr=10.0.0.2, port=4420 00:22:03.679 [2024-07-15 16:07:57.287106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9240 is same with the state(5) to be set 00:22:03.679 [2024-07-15 16:07:57.287138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9240 (9): Bad file descriptor 00:22:03.679 [2024-07-15 16:07:57.287163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.679 [2024-07-15 16:07:57.287175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.679 [2024-07-15 16:07:57.287187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.679 [2024-07-15 16:07:57.287219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.679 [2024-07-15 16:07:57.287234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.610 [2024-07-15 16:07:58.287411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-07-15 16:07:58.287496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9240 with addr=10.0.0.2, port=4420 00:22:04.610 [2024-07-15 16:07:58.287514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9240 is same with the state(5) to be set 00:22:04.610 [2024-07-15 16:07:58.287547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9240 (9): Bad file descriptor 00:22:04.610 [2024-07-15 16:07:58.287571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.610 [2024-07-15 16:07:58.287584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.610 [2024-07-15 16:07:58.287597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.610 [2024-07-15 16:07:58.287628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.610 [2024-07-15 16:07:58.287643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.983 [2024-07-15 16:07:59.291173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.983 [2024-07-15 16:07:59.291258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9240 with addr=10.0.0.2, port=4420 00:22:05.983 [2024-07-15 16:07:59.291276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9240 is same with the state(5) to be set 00:22:05.983 [2024-07-15 16:07:59.291551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9240 (9): Bad file descriptor 00:22:05.983 [2024-07-15 16:07:59.291838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.983 [2024-07-15 16:07:59.291854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.983 [2024-07-15 16:07:59.291867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.983 [2024-07-15 16:07:59.296104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.983 [2024-07-15 16:07:59.296141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.983 16:07:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.983 [2024-07-15 16:07:59.542729] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.983 16:07:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 97087 00:22:06.915 [2024-07-15 16:08:00.333247] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
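Editor's note on the trace above: this is the first timeout scenario in host/timeout.sh. With the TCP listener gone, every reconnect attempt fails with errno 111 (connection refused) and the host retries roughly once per second; as soon as timeout.sh re-adds the listener at @102, the very next reset completes ("Resetting controller successful."). The sketch below is a minimal, hypothetical reproduction of that listener toggle using the same scripts/rpc.py CLI the test invokes. The paths, NQN, address and the 3-second window are copied from the trace; the Python wrapper itself (the rpc() helper and the constants) is illustrative glue, not part of the SPDK test suite.

#!/usr/bin/env python3
"""Minimal sketch (not part of the SPDK test suite): toggle the NVMe-oF/TCP
listener on the target so the initiator's reconnect handling can be observed.
Paths, NQN and address are copied from the trace above; everything else is
illustrative."""
import subprocess
import time

SPDK_RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path seen in the trace
NQN = "nqn.2016-06.io.spdk:cnode1"
LISTENER = ["-t", "tcp", "-a", "10.0.0.2", "-s", "4420"]


def rpc(*args: str) -> None:
    # Shell out to SPDK's JSON-RPC CLI against the target's default socket,
    # exactly as host/timeout.sh does.
    subprocess.run([SPDK_RPC, *args], check=True)


# Drop the listener: the host side starts logging connect() failures (errno 111).
rpc("nvmf_subsystem_remove_listener", NQN, *LISTENER)

# timeout.sh@101 sleeps 3 seconds before restoring the listener.
time.sleep(3)

# Restore the listener: the next scheduled reconnect succeeds and the log
# reports "Resetting controller successful."
rpc("nvmf_subsystem_add_listener", NQN, *LISTENER)

With the path restored, the first bdevperf job drains and its latency summary follows; note that it reports both completed I/O (about 5407 IOPS) and the I/O that failed during the outage window (the Fail/s column).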
00:22:12.172 00:22:12.172 Latency(us) 00:22:12.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.172 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:12.172 Verification LBA range: start 0x0 length 0x4000 00:22:12.172 NVMe0n1 : 10.01 5407.55 21.12 3568.00 0.00 14226.02 659.08 3035150.89 00:22:12.172 =================================================================================================================== 00:22:12.172 Total : 5407.55 21.12 3568.00 0.00 14226.02 0.00 3035150.89 00:22:12.172 0 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96918 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96918 ']' 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96918 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96918 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:12.172 killing process with pid 96918 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96918' 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96918 00:22:12.172 Received shutdown signal, test time was about 10.000000 seconds 00:22:12.172 00:22:12.172 Latency(us) 00:22:12.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.172 =================================================================================================================== 00:22:12.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96918 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97209 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97209 /var/tmp/bdevperf.sock 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 97209 ']' 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.172 16:08:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:12.172 [2024-07-15 16:08:05.465030] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:22:12.172 [2024-07-15 16:08:05.465147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97209 ] 00:22:12.172 [2024-07-15 16:08:05.599611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.172 [2024-07-15 16:08:05.697147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.737 16:08:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.737 16:08:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:12.737 16:08:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97237 00:22:12.737 16:08:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97209 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:12.737 16:08:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:12.995 16:08:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:13.253 NVMe0n1 00:22:13.510 16:08:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97285 00:22:13.511 16:08:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:13.511 16:08:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:13.511 Running I/O for 10 seconds... 
00:22:14.444 16:08:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.704 [2024-07-15 16:08:08.225412] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225481] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225509] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225517] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225525] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225534] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225541] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225550] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225558] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225566] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225574] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225582] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.704 [2024-07-15 16:08:08.225589] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225597] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225605] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225613] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225620] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225628] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225635] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225643] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225650] 
tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225658] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225665] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225673] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225680] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225689] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225697] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225704] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225712] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225720] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225727] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225736] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225744] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225752] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225759] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225767] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225774] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225782] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225806] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225814] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225822] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225830] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the 
state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225838] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225846] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225854] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225861] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225869] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225878] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225886] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225903] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225928] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225937] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225945] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225953] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225961] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225970] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.225990] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226000] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226008] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226017] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226025] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226034] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226042] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226050] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226059] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226066] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226074] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226082] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226090] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226098] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226106] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226114] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226129] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226137] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226145] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226153] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226161] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226169] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226177] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226186] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226194] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226202] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226210] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226218] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226226] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 
16:08:08.226234] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226243] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226251] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226262] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226271] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226280] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226288] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226297] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226321] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226329] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.226337] tcp.c:1663:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ff50 is same with the state(5) to be set 00:22:14.705 [2024-07-15 16:08:08.227635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.227695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.227721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.227751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.227765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.227776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.227789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.227799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.227811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.227822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 
[2024-07-15 16:08:08.227834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.227844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.227857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.227867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.227879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.227890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.227902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.227912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.228393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.228475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.228496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.228507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.228520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.228530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.228542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.228552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.228685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.228815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.229077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.229108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.229125] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.229136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.229149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.229160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.229172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.229183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.229196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.229207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.229219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.229577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.229595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.229607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.229619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.229630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.229643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.229654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.229666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.229677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.229689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.229954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.230093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.230105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.230119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.230129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.230142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.230152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.230165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.230175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.230187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.230198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.230210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.230452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.230476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.230753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.705 [2024-07-15 16:08:08.230879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.705 [2024-07-15 16:08:08.230895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.230909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.230920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.230933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.230944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.230969] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.230984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.231389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.231504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.231523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.231534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.231546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.231558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.231571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.231582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.231594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.231605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.231950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.231980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.231994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.232005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.232017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.232028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.232041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.232052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.232064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71128 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.232189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.232435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.232461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.232477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.232488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.232501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.232512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.232525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.232536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.232548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.232559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.232907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.232924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.232937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.232948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.233081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.233100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.233520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.233641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.233658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.233668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.233681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.233692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.233704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.233715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.233727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.233988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.234375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.234403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.234421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.234432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.234445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.234455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.234471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.234482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.234494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.234830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.234856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.234869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.234881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 
16:08:08.234892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.234905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.234915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.234928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.234938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.234950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.235205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.235334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.235481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.235768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.235882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.235900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.235911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.235924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.235935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.235947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.236076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.236100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.236366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.236500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.236516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.236528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.236800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.236883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.236899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.236912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.236922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.236935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.236945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.237095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.237226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.237244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.237255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.237268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.237405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.237675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.237777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.237795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.237806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.237819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.238101] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.238135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.238148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.238160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.238171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.238424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.238453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.238468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.238479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.238492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.238623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.238759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.238778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.239058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.239147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.239165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.239177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.239191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.239201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.239434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.239448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.239461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.239473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.239485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.239759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.239899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.240121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.240147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.240159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.240171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.240182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.240443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.240472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.240487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.240498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.240510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.240520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.240772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.240798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.706 [2024-07-15 16:08:08.240939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.706 [2024-07-15 16:08:08.241192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:14.706 [2024-07-15 16:08:08.241212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.241223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.241471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.241501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.241517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.241528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.241540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.241675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.241819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.242048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.242066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.242077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.242090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.242101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.242248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.242334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.242349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.242361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.242373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.242384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 
16:08:08.242528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.242614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.242630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.242642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.242654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.242665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.242677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.242946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.242989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.243128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.243209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.243222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.243235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.243246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.243260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.243518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.243548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.243560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.243574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.243707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.243836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.707 [2024-07-15 16:08:08.243851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.244090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.707 [2024-07-15 16:08:08.244115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.707 [2024-07-15 16:08:08.244126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119440 len:8 PRP1 0x0 PRP2 0x0 00:22:14.707 [2024-07-15 16:08:08.244136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.244450] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10908d0 was disconnected and freed. reset controller. 00:22:14.707 [2024-07-15 16:08:08.244716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.707 [2024-07-15 16:08:08.244748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.244762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.707 [2024-07-15 16:08:08.244772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.244783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.707 [2024-07-15 16:08:08.244793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.244803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.707 [2024-07-15 16:08:08.245105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.707 [2024-07-15 16:08:08.245120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023240 is same with the state(5) to be set 00:22:14.707 [2024-07-15 16:08:08.245610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:14.707 [2024-07-15 16:08:08.245683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1023240 (9): Bad file descriptor 00:22:14.707 [2024-07-15 16:08:08.245870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.707 [2024-07-15 16:08:08.246146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1023240 with addr=10.0.0.2, port=4420 00:22:14.707 [2024-07-15 16:08:08.246174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023240 is same with the state(5) to be set 00:22:14.707 [2024-07-15 16:08:08.246199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1023240 (9): Bad file descriptor 00:22:14.707 [2024-07-15 
16:08:08.246219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:14.707 [2024-07-15 16:08:08.246230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:14.707 [2024-07-15 16:08:08.246243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:14.707 [2024-07-15 16:08:08.246267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:14.707 [2024-07-15 16:08:08.246280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:14.707 16:08:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 97285 00:22:16.604 [2024-07-15 16:08:10.246505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:16.604 [2024-07-15 16:08:10.246589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1023240 with addr=10.0.0.2, port=4420 00:22:16.604 [2024-07-15 16:08:10.246609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023240 is same with the state(5) to be set 00:22:16.604 [2024-07-15 16:08:10.246641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1023240 (9): Bad file descriptor 00:22:16.604 [2024-07-15 16:08:10.246665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.604 [2024-07-15 16:08:10.246677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:16.604 [2024-07-15 16:08:10.246689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:16.604 [2024-07-15 16:08:10.246720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:16.604 [2024-07-15 16:08:10.246734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.131 [2024-07-15 16:08:12.246934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.131 [2024-07-15 16:08:12.247026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1023240 with addr=10.0.0.2, port=4420 00:22:19.131 [2024-07-15 16:08:12.247045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023240 is same with the state(5) to be set 00:22:19.131 [2024-07-15 16:08:12.247075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1023240 (9): Bad file descriptor 00:22:19.131 [2024-07-15 16:08:12.247099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:19.131 [2024-07-15 16:08:12.247111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:19.131 [2024-07-15 16:08:12.247123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:19.131 [2024-07-15 16:08:12.247153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:19.131 [2024-07-15 16:08:12.247166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:20.535 [2024-07-15 16:08:14.247475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:20.535 [2024-07-15 16:08:14.247529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:20.535 [2024-07-15 16:08:14.247544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:20.535 [2024-07-15 16:08:14.247555] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:20.535 [2024-07-15 16:08:14.247588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:21.908 00:22:21.908 Latency(us) 00:22:21.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.908 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:21.908 NVMe0n1 : 8.17 2566.61 10.03 15.67 0.00 49608.43 2472.49 7046430.72 00:22:21.908 =================================================================================================================== 00:22:21.908 Total : 2566.61 10.03 15.67 0.00 49608.43 2472.49 7046430.72 00:22:21.908 0 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:21.908 Attaching 5 probes... 00:22:21.908 1274.567321: reset bdev controller NVMe0 00:22:21.908 1274.695983: reconnect bdev controller NVMe0 00:22:21.908 3275.291271: reconnect delay bdev controller NVMe0 00:22:21.908 3275.344694: reconnect bdev controller NVMe0 00:22:21.908 5275.738787: reconnect delay bdev controller NVMe0 00:22:21.908 5275.777866: reconnect bdev controller NVMe0 00:22:21.908 7276.412128: reconnect delay bdev controller NVMe0 00:22:21.908 7276.434775: reconnect bdev controller NVMe0 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 97237 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97209 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 97209 ']' 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 97209 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97209 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:21.908 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:21.909 killing process with pid 97209 00:22:21.909 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97209' 00:22:21.909 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 97209 00:22:21.909 Received shutdown signal, test time was about 
8.223414 seconds 00:22:21.909 00:22:21.909 Latency(us) 00:22:21.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.909 =================================================================================================================== 00:22:21.909 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.909 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 97209 00:22:21.909 16:08:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.166 16:08:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:22.166 16:08:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:22.166 16:08:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:22.166 16:08:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:22:22.166 16:08:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:22.166 16:08:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:22:22.166 16:08:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:22.166 16:08:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:22.166 rmmod nvme_tcp 00:22:22.166 rmmod nvme_fabrics 00:22:22.166 rmmod nvme_keyring 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 96625 ']' 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 96625 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96625 ']' 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96625 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96625 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96625' 00:22:22.425 killing process with pid 96625 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96625 00:22:22.425 16:08:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96625 00:22:22.683 16:08:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:22.683 16:08:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:22.683 16:08:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:22.683 16:08:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.683 16:08:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:22.683 16:08:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.683 16:08:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:22:22.683 16:08:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.683 16:08:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:22.683 ************************************ 00:22:22.683 END TEST nvmf_timeout 00:22:22.683 ************************************ 00:22:22.683 00:22:22.683 real 0m47.117s 00:22:22.683 user 2m18.181s 00:22:22.683 sys 0m5.226s 00:22:22.683 16:08:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:22.683 16:08:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:22.683 16:08:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:22.683 16:08:16 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:22:22.684 16:08:16 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:22:22.684 16:08:16 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:22.684 16:08:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.684 16:08:16 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:22:22.684 ************************************ 00:22:22.684 END TEST nvmf_tcp 00:22:22.684 ************************************ 00:22:22.684 00:22:22.684 real 15m55.777s 00:22:22.684 user 42m22.532s 00:22:22.684 sys 3m27.403s 00:22:22.684 16:08:16 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:22.684 16:08:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.684 16:08:16 -- common/autotest_common.sh@1142 -- # return 0 00:22:22.684 16:08:16 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:22:22.684 16:08:16 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:22.684 16:08:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:22.684 16:08:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:22.684 16:08:16 -- common/autotest_common.sh@10 -- # set +x 00:22:22.684 ************************************ 00:22:22.684 START TEST spdkcli_nvmf_tcp 00:22:22.684 ************************************ 00:22:22.684 16:08:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:22.965 * Looking for test storage... 
00:22:22.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:22.965 16:08:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:22.965 16:08:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:22.965 16:08:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:22.965 16:08:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:22.965 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=97508 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 97508 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 97508 ']' 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
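
The spdkcli_nvmf_tcp prologue above starts nvmf_tgt (pid 97508) and then blocks in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock starts answering. The bash sketch below illustrates that kind of readiness poll; it is not the autotest_common.sh implementation, and the retry count and sleep interval are assumptions: only the rpc.py path, the pid, and the default socket path are taken from this trace.

    # Poll the SPDK RPC socket until it answers, or give up if the target process dies.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            # Bail out early if the target exited before its socket came up.
            kill -0 "$pid" 2> /dev/null || return 1
            # rpc_get_methods succeeds once the socket is up and serving RPCs.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    # Roughly what "waitforlisten 97508" in the trace amounts to:
    wait_for_rpc 97508 /var/tmp/spdk.sock || exit 1
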
00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.966 16:08:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.966 [2024-07-15 16:08:16.523555] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:22:22.966 [2024-07-15 16:08:16.523669] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97508 ] 00:22:22.966 [2024-07-15 16:08:16.665007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:23.223 [2024-07-15 16:08:16.776520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.223 [2024-07-15 16:08:16.776535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.802 16:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.802 16:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:22:23.802 16:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:23.802 16:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:23.802 16:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.802 16:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:23.802 16:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:22:23.802 16:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:23.802 16:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:23.802 16:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.802 16:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:23.802 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:23.802 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:23.802 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:23.802 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:23.802 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:22:23.802 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:23.802 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:23.802 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:23.802 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:23.802 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:23.802 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:23.802 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:23.802 ' 00:22:27.077 [2024-07-15 16:08:20.091395] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.643 [2024-07-15 16:08:21.356352] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:22:30.171 [2024-07-15 16:08:23.713864] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:22:32.067 [2024-07-15 16:08:25.747203] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:22:33.964 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:33.964 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:33.964 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:33.964 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:33.964 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:33.964 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:33.964 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:33.964 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:33.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:33.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:33.964 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:33.964 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:33.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:33.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:33.965 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:33.965 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:33.965 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:33.965 16:08:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:33.965 16:08:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.965 16:08:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.965 16:08:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:33.965 16:08:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:33.965 16:08:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.965 16:08:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:22:33.965 16:08:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:22:34.266 16:08:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:34.266 16:08:27 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:34.266 16:08:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:34.266 16:08:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:34.266 16:08:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.266 16:08:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:34.266 16:08:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:34.266 16:08:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.266 16:08:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:34.266 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:34.266 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:34.266 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:34.266 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:22:34.266 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:22:34.266 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:34.266 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:34.266 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:34.266 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:34.266 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:34.266 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:34.266 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:34.266 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:34.266 ' 00:22:39.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:39.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:39.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:39.562 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:39.562 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:22:39.562 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:22:39.562 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:39.562 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:39.562 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:39.562 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:39.562 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:39.562 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:39.562 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 
00:22:39.562 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:39.562 16:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:39.562 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:39.562 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 97508 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 97508 ']' 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 97508 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97508 00:22:39.819 killing process with pid 97508 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97508' 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 97508 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 97508 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 97508 ']' 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 97508 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 97508 ']' 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 97508 00:22:39.819 Process with pid 97508 is not found 00:22:39.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (97508) - No such process 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 97508 is not found' 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:39.819 00:22:39.819 real 0m17.181s 00:22:39.819 user 0m36.811s 00:22:39.819 sys 0m0.941s 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:39.819 ************************************ 00:22:39.819 END TEST spdkcli_nvmf_tcp 00:22:39.819 16:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:39.819 ************************************ 00:22:40.078 16:08:33 -- common/autotest_common.sh@1142 -- # return 0 00:22:40.078 16:08:33 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:40.078 16:08:33 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:40.078 16:08:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.078 16:08:33 -- common/autotest_common.sh@10 -- # set +x 00:22:40.078 
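Each of the spdkcli paths exercised above ("/bdevs/malloc create", "/nvmf/subsystem create", "listen_addresses create", and the matching delete commands) maps onto an SPDK JSON-RPC, so the same configuration can be built or torn down directly with scripts/rpc.py. A rough equivalent for one of the subsystems created above, with serial number, port and bdev name copied from the log and the default RPC socket assumed (sketch only, not the exact commands the job runs):

    # Build one TCP subsystem the way the spdkcli job above does, but via raw RPC calls (sketch).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -u 8192                      # io_unit_size=8192
    $rpc bdev_malloc_create -b Malloc3 32 512                      # 32 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -a -s N37SXV509SRW -m 4
    $rpc nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260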
************************************ 00:22:40.078 START TEST nvmf_identify_passthru 00:22:40.078 ************************************ 00:22:40.078 16:08:33 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:40.078 * Looking for test storage... 00:22:40.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:40.078 16:08:33 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:40.078 16:08:33 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.078 16:08:33 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.078 16:08:33 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.078 16:08:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.078 16:08:33 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.078 16:08:33 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.078 16:08:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:40.078 16:08:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:40.078 16:08:33 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:40.078 16:08:33 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.078 16:08:33 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.078 16:08:33 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.078 16:08:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.078 16:08:33 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.078 16:08:33 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.078 16:08:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:40.078 16:08:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.078 16:08:33 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.078 16:08:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:40.078 16:08:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:40.078 Cannot find device "nvmf_tgt_br" 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:40.078 Cannot find device "nvmf_tgt_br2" 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:40.078 Cannot find device "nvmf_tgt_br" 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:40.078 Cannot find device "nvmf_tgt_br2" 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:40.078 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:40.336 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:40.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:40.336 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:22:40.336 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:40.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:40.336 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:22:40.336 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:40.336 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:40.336 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:40.336 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:40.336 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:40.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:22:40.337 00:22:40.337 --- 10.0.0.2 ping statistics --- 00:22:40.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.337 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:40.337 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:40.337 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:22:40.337 00:22:40.337 --- 10.0.0.3 ping statistics --- 00:22:40.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.337 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:40.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:40.337 00:22:40.337 --- 10.0.0.1 ping statistics --- 00:22:40.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.337 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:40.337 16:08:33 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:40.337 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:22:40.337 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:40.337 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:40.337 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:22:40.337 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:22:40.337 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:22:40.337 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:22:40.337 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:22:40.337 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:22:40.337 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:22:40.337 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:40.337 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:22:40.337 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:40.603 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:22:40.603 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:40.603 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:22:40.603 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:22:40.603 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:22:40.603 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:40.603 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:22:40.603 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:22:40.603 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
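The value just captured (serial number 12340, read from the PCIe controller at 0000:00:10.0) is what the passthru test later compares against the same controller exposed through the NVMe/TCP subsystem; both reads use spdk_nvme_identify, differing only in the -r transport string, roughly as in this sketch (addresses and NQN copied from the log):

    # Compare the local controller's serial number with the one reported over the passthru subsystem (sketch).
    identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    local_sn=$($identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
    fabric_sn=$($identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
                | grep 'Serial Number:' | awk '{print $3}')
    [ "$local_sn" = "$fabric_sn" ] || echo "identify passthru mismatch: $local_sn vs $fabric_sn" >&2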
00:22:40.603 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:40.603 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:22:40.603 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:22:40.870 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:22:40.870 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:22:40.870 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.870 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:40.870 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:22:40.870 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:40.870 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:40.870 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97992 00:22:40.870 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:40.870 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:40.870 16:08:34 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97992 00:22:40.870 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 97992 ']' 00:22:40.870 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.870 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.870 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.870 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.870 16:08:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:40.870 [2024-07-15 16:08:34.536312] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:22:40.870 [2024-07-15 16:08:34.536422] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.128 [2024-07-15 16:08:34.674079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:41.128 [2024-07-15 16:08:34.783147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.128 [2024-07-15 16:08:34.783209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.128 [2024-07-15 16:08:34.783221] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.128 [2024-07-15 16:08:34.783229] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:41.128 [2024-07-15 16:08:34.783236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.128 [2024-07-15 16:08:34.783376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.128 [2024-07-15 16:08:34.783602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.128 [2024-07-15 16:08:34.783609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.128 [2024-07-15 16:08:34.784233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:42.061 [2024-07-15 16:08:35.599567] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:42.061 [2024-07-15 16:08:35.613570] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:42.061 Nvme0n1 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:42.061 [2024-07-15 16:08:35.748256] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:42.061 [ 00:22:42.061 { 00:22:42.061 "allow_any_host": true, 00:22:42.061 "hosts": [], 00:22:42.061 "listen_addresses": [], 00:22:42.061 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:42.061 "subtype": "Discovery" 00:22:42.061 }, 00:22:42.061 { 00:22:42.061 "allow_any_host": true, 00:22:42.061 "hosts": [], 00:22:42.061 "listen_addresses": [ 00:22:42.061 { 00:22:42.061 "adrfam": "IPv4", 00:22:42.061 "traddr": "10.0.0.2", 00:22:42.061 "trsvcid": "4420", 00:22:42.061 "trtype": "TCP" 00:22:42.061 } 00:22:42.061 ], 00:22:42.061 "max_cntlid": 65519, 00:22:42.061 "max_namespaces": 1, 00:22:42.061 "min_cntlid": 1, 00:22:42.061 "model_number": "SPDK bdev Controller", 00:22:42.061 "namespaces": [ 00:22:42.061 { 00:22:42.061 "bdev_name": "Nvme0n1", 00:22:42.061 "name": "Nvme0n1", 00:22:42.061 "nguid": "4E357CD9500D4AA4B4BF8780CDD47110", 00:22:42.061 "nsid": 1, 00:22:42.061 "uuid": "4e357cd9-500d-4aa4-b4bf-8780cdd47110" 00:22:42.061 } 00:22:42.061 ], 00:22:42.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.061 "serial_number": "SPDK00000000000001", 00:22:42.061 "subtype": "NVMe" 00:22:42.061 } 00:22:42.061 ] 00:22:42.061 16:08:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:22:42.061 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:22:42.320 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:22:42.320 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:42.320 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:22:42.320 16:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:22:42.578 16:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:22:42.578 16:08:36 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:22:42.578 16:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:22:42.578 16:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.578 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.578 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:42.578 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.578 16:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:22:42.578 16:08:36 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:22:42.578 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.578 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:22:42.578 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.578 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:22:42.578 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.578 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.578 rmmod nvme_tcp 00:22:42.578 rmmod nvme_fabrics 00:22:42.578 rmmod nvme_keyring 00:22:42.578 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.578 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:22:42.578 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:22:42.578 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97992 ']' 00:22:42.578 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97992 00:22:42.578 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 97992 ']' 00:22:42.578 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 97992 00:22:42.836 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:22:42.836 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.836 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97992 00:22:42.836 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:42.836 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:42.836 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97992' 00:22:42.836 killing process with pid 97992 00:22:42.836 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 97992 00:22:42.836 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 97992 00:22:42.836 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.836 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.836 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.836 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.836 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.836 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.836 16:08:36 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:42.836 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.094 16:08:36 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:43.094 00:22:43.094 real 0m3.011s 00:22:43.094 user 0m7.442s 00:22:43.094 sys 0m0.791s 00:22:43.094 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.094 ************************************ 00:22:43.094 END TEST nvmf_identify_passthru 00:22:43.094 ************************************ 00:22:43.094 16:08:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:43.094 16:08:36 -- common/autotest_common.sh@1142 -- # return 0 00:22:43.094 16:08:36 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:43.094 16:08:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:43.094 16:08:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.094 16:08:36 -- common/autotest_common.sh@10 -- # set +x 00:22:43.094 ************************************ 00:22:43.094 START TEST nvmf_dif 00:22:43.094 ************************************ 00:22:43.094 16:08:36 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:43.094 * Looking for test storage... 00:22:43.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:43.094 16:08:36 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:43.094 16:08:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:43.094 16:08:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.094 16:08:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.094 16:08:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.094 16:08:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.094 16:08:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.094 16:08:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:43.095 16:08:36 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.095 16:08:36 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.095 16:08:36 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.095 16:08:36 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.095 16:08:36 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.095 16:08:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.095 16:08:36 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:43.095 16:08:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.095 16:08:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:43.095 16:08:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:43.095 16:08:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:43.095 16:08:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:43.095 16:08:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.095 16:08:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:43.095 16:08:36 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:43.095 Cannot find device "nvmf_tgt_br" 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@155 -- # true 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:43.095 Cannot find device "nvmf_tgt_br2" 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@156 -- # true 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:43.095 Cannot find device "nvmf_tgt_br" 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@158 -- # true 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:43.095 Cannot find device "nvmf_tgt_br2" 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@159 -- # true 00:22:43.095 16:08:36 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:43.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:43.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:43.352 16:08:36 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:43.352 16:08:37 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:43.352 16:08:37 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:43.352 16:08:37 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:43.352 16:08:37 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:43.352 16:08:37 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:43.352 16:08:37 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:43.352 16:08:37 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:43.352 16:08:37 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:43.352 16:08:37 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:43.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:22:43.352 00:22:43.352 --- 10.0.0.2 ping statistics --- 00:22:43.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.352 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:22:43.352 16:08:37 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:43.352 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:43.352 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:22:43.352 00:22:43.352 --- 10.0.0.3 ping statistics --- 00:22:43.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.352 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:22:43.352 16:08:37 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:43.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:43.610 00:22:43.610 --- 10.0.0.1 ping statistics --- 00:22:43.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.610 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:43.610 16:08:37 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.610 16:08:37 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:22:43.610 16:08:37 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:43.610 16:08:37 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:43.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:43.868 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:43.868 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:43.868 16:08:37 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.868 16:08:37 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:43.868 16:08:37 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:43.868 16:08:37 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.868 16:08:37 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:43.868 16:08:37 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:43.868 16:08:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:43.868 16:08:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:43.868 16:08:37 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.868 16:08:37 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:43.868 16:08:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:43.868 16:08:37 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=98347 00:22:43.868 16:08:37 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:43.868 16:08:37 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 98347 00:22:43.868 16:08:37 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 98347 ']' 00:22:43.868 16:08:37 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.868 16:08:37 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.868 16:08:37 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.868 16:08:37 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.868 16:08:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:43.868 [2024-07-15 16:08:37.535055] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:22:43.868 [2024-07-15 16:08:37.535176] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.127 [2024-07-15 16:08:37.671658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.127 [2024-07-15 16:08:37.785946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
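For orientation, the nvmf_veth_init sequence traced above reduces to the topology commands below. This is a condensed sketch of what the log shows (interface names and addresses copied from the trace), not the full retry and cleanup logic in nvmf/common.sh:

# Condensed sketch of the veth/netns topology built by nvmf_veth_init above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator side reaches the first target address
ping -c 1 10.0.0.3   # initiator side reaches the second target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace reaches the initiator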
00:22:44.127 [2024-07-15 16:08:37.786012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.127 [2024-07-15 16:08:37.786025] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.127 [2024-07-15 16:08:37.786034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.127 [2024-07-15 16:08:37.786042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.127 [2024-07-15 16:08:37.786073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.062 16:08:38 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.062 16:08:38 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:22:45.062 16:08:38 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.062 16:08:38 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:45.062 16:08:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:45.062 16:08:38 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.062 16:08:38 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:45.062 16:08:38 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:45.062 16:08:38 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.062 16:08:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:45.062 [2024-07-15 16:08:38.588033] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.062 16:08:38 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.062 16:08:38 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:45.062 16:08:38 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:45.063 16:08:38 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.063 16:08:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:45.063 ************************************ 00:22:45.063 START TEST fio_dif_1_default 00:22:45.063 ************************************ 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:45.063 bdev_null0 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.063 16:08:38 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:45.063 [2024-07-15 16:08:38.632141] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.063 { 00:22:45.063 "params": { 00:22:45.063 "name": "Nvme$subsystem", 00:22:45.063 "trtype": "$TEST_TRANSPORT", 00:22:45.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.063 "adrfam": "ipv4", 00:22:45.063 "trsvcid": "$NVMF_PORT", 00:22:45.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.063 "hdgst": ${hdgst:-false}, 00:22:45.063 "ddgst": ${ddgst:-false} 00:22:45.063 }, 00:22:45.063 "method": "bdev_nvme_attach_controller" 00:22:45.063 } 00:22:45.063 EOF 00:22:45.063 )") 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 
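For reference, the target setup just traced for fio_dif_1_default comes down to the RPC sequence below; rpc_cmd is the autotest helper that forwards these calls to the JSON-RPC socket of the nvmf_tgt running inside nvmf_tgt_ns_spdk:

# DIF type 1 null bdev exported over NVMe/TCP, exactly as traced above.
rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420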
00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:45.063 "params": { 00:22:45.063 "name": "Nvme0", 00:22:45.063 "trtype": "tcp", 00:22:45.063 "traddr": "10.0.0.2", 00:22:45.063 "adrfam": "ipv4", 00:22:45.063 "trsvcid": "4420", 00:22:45.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:45.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:45.063 "hdgst": false, 00:22:45.063 "ddgst": false 00:22:45.063 }, 00:22:45.063 "method": "bdev_nvme_attach_controller" 00:22:45.063 }' 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:45.063 16:08:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:45.322 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:45.322 fio-3.35 00:22:45.322 Starting 1 thread 00:22:57.517 00:22:57.517 filename0: (groupid=0, jobs=1): err= 0: pid=98433: Mon Jul 15 16:08:49 2024 00:22:57.517 read: IOPS=2858, BW=11.2MiB/s (11.7MB/s)(112MiB/10001msec) 00:22:57.517 slat (nsec): min=6212, max=51935, avg=8503.34, stdev=3166.04 00:22:57.517 clat (usec): min=376, max=42483, avg=1374.22, stdev=5976.34 00:22:57.517 lat (usec): min=383, max=42493, avg=1382.72, stdev=5976.37 00:22:57.517 clat percentiles (usec): 00:22:57.517 | 1.00th=[ 400], 5.00th=[ 424], 10.00th=[ 437], 20.00th=[ 449], 00:22:57.517 | 30.00th=[ 457], 40.00th=[ 465], 50.00th=[ 474], 
60.00th=[ 482], 00:22:57.517 | 70.00th=[ 486], 80.00th=[ 498], 90.00th=[ 515], 95.00th=[ 529], 00:22:57.517 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:22:57.517 | 99.99th=[42730] 00:22:57.517 bw ( KiB/s): min= 5440, max=14560, per=99.17%, avg=11339.79, stdev=2414.75, samples=19 00:22:57.517 iops : min= 1360, max= 3640, avg=2834.95, stdev=603.69, samples=19 00:22:57.517 lat (usec) : 500=82.04%, 750=15.72% 00:22:57.517 lat (msec) : 4=0.01%, 50=2.22% 00:22:57.517 cpu : usr=89.50%, sys=9.26%, ctx=26, majf=0, minf=9 00:22:57.517 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:57.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:57.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:57.517 issued rwts: total=28588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:57.517 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:57.517 00:22:57.517 Run status group 0 (all jobs): 00:22:57.517 READ: bw=11.2MiB/s (11.7MB/s), 11.2MiB/s-11.2MiB/s (11.7MB/s-11.7MB/s), io=112MiB (117MB), run=10001-10001msec 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.517 00:22:57.517 real 0m11.034s 00:22:57.517 user 0m9.619s 00:22:57.517 sys 0m1.194s 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:57.517 ************************************ 00:22:57.517 END TEST fio_dif_1_default 00:22:57.517 16:08:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 ************************************ 00:22:57.517 16:08:49 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:57.517 16:08:49 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:57.517 16:08:49 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:57.517 16:08:49 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:57.517 16:08:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 ************************************ 00:22:57.517 START TEST fio_dif_1_multi_subsystems 00:22:57.517 ************************************ 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:57.518 bdev_null0 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:57.518 [2024-07-15 16:08:49.715333] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:57.518 bdev_null1 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.518 16:08:49 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.518 { 00:22:57.518 "params": { 00:22:57.518 "name": "Nvme$subsystem", 00:22:57.518 "trtype": "$TEST_TRANSPORT", 00:22:57.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.518 "adrfam": "ipv4", 00:22:57.518 "trsvcid": "$NVMF_PORT", 00:22:57.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.518 "hdgst": ${hdgst:-false}, 00:22:57.518 "ddgst": ${ddgst:-false} 00:22:57.518 }, 00:22:57.518 "method": "bdev_nvme_attach_controller" 00:22:57.518 } 00:22:57.518 EOF 
00:22:57.518 )") 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.518 { 00:22:57.518 "params": { 00:22:57.518 "name": "Nvme$subsystem", 00:22:57.518 "trtype": "$TEST_TRANSPORT", 00:22:57.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.518 "adrfam": "ipv4", 00:22:57.518 "trsvcid": "$NVMF_PORT", 00:22:57.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.518 "hdgst": ${hdgst:-false}, 00:22:57.518 "ddgst": ${ddgst:-false} 00:22:57.518 }, 00:22:57.518 "method": "bdev_nvme_attach_controller" 00:22:57.518 } 00:22:57.518 EOF 00:22:57.518 )") 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:57.518 "params": { 00:22:57.518 "name": "Nvme0", 00:22:57.518 "trtype": "tcp", 00:22:57.518 "traddr": "10.0.0.2", 00:22:57.518 "adrfam": "ipv4", 00:22:57.518 "trsvcid": "4420", 00:22:57.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:57.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:57.518 "hdgst": false, 00:22:57.518 "ddgst": false 00:22:57.518 }, 00:22:57.518 "method": "bdev_nvme_attach_controller" 00:22:57.518 },{ 00:22:57.518 "params": { 00:22:57.518 "name": "Nvme1", 00:22:57.518 "trtype": "tcp", 00:22:57.518 "traddr": "10.0.0.2", 00:22:57.518 "adrfam": "ipv4", 00:22:57.518 "trsvcid": "4420", 00:22:57.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.518 "hdgst": false, 00:22:57.518 "ddgst": false 00:22:57.518 }, 00:22:57.518 "method": "bdev_nvme_attach_controller" 00:22:57.518 }' 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:57.518 16:08:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:57.518 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:57.518 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:57.518 fio-3.35 00:22:57.518 Starting 2 threads 00:23:07.484 00:23:07.484 filename0: (groupid=0, jobs=1): err= 0: pid=98592: Mon Jul 15 16:09:00 2024 00:23:07.484 read: IOPS=200, BW=803KiB/s (823kB/s)(8064KiB/10039msec) 00:23:07.484 slat (nsec): min=6487, max=41527, avg=9343.19, stdev=4286.34 00:23:07.484 clat (usec): min=403, max=42611, avg=19888.42, stdev=20234.44 00:23:07.484 lat (usec): min=410, max=42620, avg=19897.76, stdev=20234.56 00:23:07.484 clat percentiles (usec): 00:23:07.484 | 1.00th=[ 429], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 478], 00:23:07.484 | 30.00th=[ 490], 40.00th=[ 510], 50.00th=[ 832], 60.00th=[40633], 00:23:07.484 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:23:07.484 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:23:07.484 | 99.99th=[42730] 00:23:07.484 bw ( KiB/s): min= 576, max= 1120, per=49.02%, avg=804.80, stdev=163.59, samples=20 00:23:07.484 iops : 
min= 144, max= 280, avg=201.20, stdev=40.90, samples=20 00:23:07.484 lat (usec) : 500=36.01%, 750=12.25%, 1000=3.62% 00:23:07.484 lat (msec) : 2=0.10%, 10=0.20%, 50=47.82% 00:23:07.484 cpu : usr=95.01%, sys=4.56%, ctx=16, majf=0, minf=0 00:23:07.484 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:07.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.484 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.484 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:07.484 filename1: (groupid=0, jobs=1): err= 0: pid=98593: Mon Jul 15 16:09:00 2024 00:23:07.484 read: IOPS=209, BW=840KiB/s (860kB/s)(8400KiB/10001msec) 00:23:07.484 slat (nsec): min=6553, max=69102, avg=9329.82, stdev=4763.72 00:23:07.484 clat (usec): min=398, max=42423, avg=19020.04, stdev=20169.09 00:23:07.484 lat (usec): min=405, max=42432, avg=19029.37, stdev=20169.07 00:23:07.484 clat percentiles (usec): 00:23:07.484 | 1.00th=[ 429], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 478], 00:23:07.484 | 30.00th=[ 490], 40.00th=[ 506], 50.00th=[ 553], 60.00th=[40633], 00:23:07.484 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:23:07.484 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:23:07.484 | 99.99th=[42206] 00:23:07.484 bw ( KiB/s): min= 480, max= 1440, per=51.34%, avg=842.16, stdev=220.92, samples=19 00:23:07.484 iops : min= 120, max= 360, avg=210.53, stdev=55.24, samples=19 00:23:07.484 lat (usec) : 500=37.43%, 750=13.57%, 1000=3.10% 00:23:07.484 lat (msec) : 10=0.19%, 50=45.71% 00:23:07.484 cpu : usr=94.92%, sys=4.68%, ctx=14, majf=0, minf=9 00:23:07.484 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:07.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.484 issued rwts: total=2100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.484 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:07.484 00:23:07.484 Run status group 0 (all jobs): 00:23:07.484 READ: bw=1640KiB/s (1679kB/s), 803KiB/s-840KiB/s (823kB/s-860kB/s), io=16.1MiB (16.9MB), run=10001-10039msec 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.484 16:09:00 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.484 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:07.485 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.485 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:07.485 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.485 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:07.485 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.485 00:23:07.485 real 0m11.220s 00:23:07.485 user 0m19.905s 00:23:07.485 sys 0m1.194s 00:23:07.485 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:07.485 ************************************ 00:23:07.485 16:09:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:07.485 END TEST fio_dif_1_multi_subsystems 00:23:07.485 ************************************ 00:23:07.485 16:09:00 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:07.485 16:09:00 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:07.485 16:09:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:07.485 16:09:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.485 16:09:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:07.485 ************************************ 00:23:07.485 START TEST fio_dif_rand_params 00:23:07.485 ************************************ 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
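The job file gen_fio_conf will hand to fio for this first fio_dif_rand_params pass is not echoed in the trace; judging from the parameters set above (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) and the fio banner and summary further down, it amounts to roughly the sketch below. The [global] section and the Nvme0n1 filename are assumptions, not something the log confirms.

# Hypothetical reconstruction of the generated job file. Only the randread workload,
# 128k block size, iodepth 3, 3 jobs and the ~5s runtime are confirmed by the fio
# output below; the bdev name and [global] contents are assumed.
cat <<'JOB'
[global]
thread=1
ioengine=spdk_bdev

[filename0]
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
filename=Nvme0n1
JOB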
00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.485 bdev_null0 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.485 [2024-07-15 16:09:00.991157] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.485 { 00:23:07.485 "params": { 00:23:07.485 "name": "Nvme$subsystem", 00:23:07.485 "trtype": "$TEST_TRANSPORT", 00:23:07.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.485 "adrfam": "ipv4", 00:23:07.485 "trsvcid": "$NVMF_PORT", 00:23:07.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.485 "hdgst": ${hdgst:-false}, 00:23:07.485 "ddgst": ${ddgst:-false} 00:23:07.485 }, 00:23:07.485 "method": "bdev_nvme_attach_controller" 00:23:07.485 } 00:23:07.485 EOF 00:23:07.485 )") 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:07.485 16:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:07.485 "params": { 00:23:07.485 "name": "Nvme0", 00:23:07.485 "trtype": "tcp", 00:23:07.485 "traddr": "10.0.0.2", 00:23:07.485 "adrfam": "ipv4", 00:23:07.485 "trsvcid": "4420", 00:23:07.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:07.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:07.485 "hdgst": false, 00:23:07.485 "ddgst": false 00:23:07.485 }, 00:23:07.485 "method": "bdev_nvme_attach_controller" 00:23:07.485 }' 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:07.485 16:09:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:07.485 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:07.485 ... 
00:23:07.485 fio-3.35 00:23:07.485 Starting 3 threads 00:23:14.061 00:23:14.061 filename0: (groupid=0, jobs=1): err= 0: pid=98748: Mon Jul 15 16:09:06 2024 00:23:14.061 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(154MiB/5005msec) 00:23:14.061 slat (nsec): min=6910, max=40688, avg=12006.61, stdev=4323.26 00:23:14.061 clat (usec): min=6505, max=53470, avg=12136.88, stdev=2244.01 00:23:14.061 lat (usec): min=6520, max=53484, avg=12148.89, stdev=2244.03 00:23:14.061 clat percentiles (usec): 00:23:14.061 | 1.00th=[ 8029], 5.00th=[10552], 10.00th=[10814], 20.00th=[11338], 00:23:14.061 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:23:14.061 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12911], 95.00th=[13304], 00:23:14.061 | 99.00th=[14353], 99.50th=[14746], 99.90th=[53216], 99.95th=[53216], 00:23:14.061 | 99.99th=[53216] 00:23:14.061 bw ( KiB/s): min=29184, max=33024, per=34.14%, avg=31545.40, stdev=1004.57, samples=10 00:23:14.061 iops : min= 228, max= 258, avg=246.40, stdev= 7.88, samples=10 00:23:14.061 lat (msec) : 10=2.35%, 20=97.41%, 100=0.24% 00:23:14.061 cpu : usr=91.65%, sys=6.55%, ctx=64, majf=0, minf=0 00:23:14.061 IO depths : 1=9.0%, 2=91.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:14.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.061 issued rwts: total=1235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:14.061 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:14.061 filename0: (groupid=0, jobs=1): err= 0: pid=98749: Mon Jul 15 16:09:06 2024 00:23:14.061 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(125MiB/5004msec) 00:23:14.061 slat (nsec): min=6843, max=36116, avg=9846.14, stdev=3721.10 00:23:14.061 clat (usec): min=8379, max=17614, avg=15002.22, stdev=1227.98 00:23:14.061 lat (usec): min=8387, max=17628, avg=15012.06, stdev=1228.07 00:23:14.061 clat percentiles (usec): 00:23:14.061 | 1.00th=[ 8848], 5.00th=[13829], 10.00th=[14091], 20.00th=[14484], 00:23:14.061 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:23:14.061 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16188], 95.00th=[16450], 00:23:14.061 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:23:14.061 | 99.99th=[17695] 00:23:14.061 bw ( KiB/s): min=24576, max=26880, per=27.61%, avg=25514.67, stdev=640.00, samples=9 00:23:14.062 iops : min= 192, max= 210, avg=199.33, stdev= 5.00, samples=9 00:23:14.062 lat (msec) : 10=2.10%, 20=97.90% 00:23:14.062 cpu : usr=92.88%, sys=5.88%, ctx=6, majf=0, minf=0 00:23:14.062 IO depths : 1=33.1%, 2=66.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:14.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.062 issued rwts: total=999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:14.062 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:14.062 filename0: (groupid=0, jobs=1): err= 0: pid=98750: Mon Jul 15 16:09:06 2024 00:23:14.062 read: IOPS=275, BW=34.5MiB/s (36.1MB/s)(173MiB/5006msec) 00:23:14.062 slat (nsec): min=7008, max=39664, avg=12341.01, stdev=3262.49 00:23:14.062 clat (usec): min=5238, max=50553, avg=10865.18, stdev=2011.92 00:23:14.062 lat (usec): min=5250, max=50566, avg=10877.52, stdev=2011.89 00:23:14.062 clat percentiles (usec): 00:23:14.062 | 1.00th=[ 7701], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:23:14.062 | 30.00th=[10421], 
40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:23:14.062 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[11994], 00:23:14.062 | 99.00th=[12387], 99.50th=[12649], 99.90th=[50594], 99.95th=[50594], 00:23:14.062 | 99.99th=[50594] 00:23:14.062 bw ( KiB/s): min=32512, max=36864, per=38.17%, avg=35276.80, stdev=1156.26, samples=10 00:23:14.062 iops : min= 254, max= 288, avg=275.60, stdev= 9.03, samples=10 00:23:14.062 lat (msec) : 10=14.93%, 20=84.86%, 50=0.07%, 100=0.14% 00:23:14.062 cpu : usr=91.95%, sys=6.47%, ctx=9, majf=0, minf=0 00:23:14.062 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:14.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.062 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:14.062 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:14.062 00:23:14.062 Run status group 0 (all jobs): 00:23:14.062 READ: bw=90.2MiB/s (94.6MB/s), 25.0MiB/s-34.5MiB/s (26.2MB/s-36.1MB/s), io=452MiB (474MB), run=5004-5006msec 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 bdev_null0 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 [2024-07-15 16:09:07.012130] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 bdev_null1 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
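[editorial sketch] The create_subsystem() helper traced here (cnode0 completed above; cnode1 and cnode2 continue below) reduces to four RPC calls per subsystem: create a DIF-capable null bdev, create the NVMe-oF subsystem, add the bdev as a namespace, and expose a TCP listener. A minimal standalone sketch of that sequence, assuming SPDK's scripts/rpc.py client, an already running target, and a TCP transport created earlier in the run; the argument values are copied from the trace:

rpc=scripts/rpc.py   # path relative to the SPDK repo; rpc_cmd in the trace is a thin wrapper around it
sub=0                # cnode1 and cnode2 follow the same pattern
# 64 MB null bdev, 512-byte blocks, 16 bytes of metadata per block, DIF type 2
$rpc bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub --serial-number 53313233-$sub --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub -t tcp -a 10.0.0.2 -s 4420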
00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 bdev_null2 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.062 { 00:23:14.062 "params": { 00:23:14.062 "name": "Nvme$subsystem", 00:23:14.062 "trtype": "$TEST_TRANSPORT", 00:23:14.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.062 "adrfam": "ipv4", 00:23:14.062 "trsvcid": "$NVMF_PORT", 00:23:14.062 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.062 "hdgst": ${hdgst:-false}, 00:23:14.062 "ddgst": ${ddgst:-false} 00:23:14.062 }, 00:23:14.062 "method": "bdev_nvme_attach_controller" 00:23:14.062 } 00:23:14.062 EOF 00:23:14.062 )") 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:14.062 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.063 { 00:23:14.063 "params": { 00:23:14.063 "name": "Nvme$subsystem", 00:23:14.063 "trtype": "$TEST_TRANSPORT", 00:23:14.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.063 "adrfam": "ipv4", 00:23:14.063 "trsvcid": "$NVMF_PORT", 00:23:14.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.063 "hdgst": ${hdgst:-false}, 00:23:14.063 "ddgst": ${ddgst:-false} 00:23:14.063 }, 00:23:14.063 "method": "bdev_nvme_attach_controller" 00:23:14.063 } 00:23:14.063 EOF 00:23:14.063 )") 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.063 { 00:23:14.063 "params": { 00:23:14.063 "name": "Nvme$subsystem", 00:23:14.063 "trtype": "$TEST_TRANSPORT", 00:23:14.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.063 "adrfam": "ipv4", 00:23:14.063 "trsvcid": "$NVMF_PORT", 00:23:14.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.063 "hdgst": ${hdgst:-false}, 00:23:14.063 "ddgst": ${ddgst:-false} 00:23:14.063 }, 00:23:14.063 "method": "bdev_nvme_attach_controller" 00:23:14.063 } 00:23:14.063 EOF 00:23:14.063 )") 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:14.063 "params": { 00:23:14.063 "name": "Nvme0", 00:23:14.063 "trtype": "tcp", 00:23:14.063 "traddr": "10.0.0.2", 00:23:14.063 "adrfam": "ipv4", 00:23:14.063 "trsvcid": "4420", 00:23:14.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:14.063 "hdgst": false, 00:23:14.063 "ddgst": false 00:23:14.063 }, 00:23:14.063 "method": "bdev_nvme_attach_controller" 00:23:14.063 },{ 00:23:14.063 "params": { 00:23:14.063 "name": "Nvme1", 00:23:14.063 "trtype": "tcp", 00:23:14.063 "traddr": "10.0.0.2", 00:23:14.063 "adrfam": "ipv4", 00:23:14.063 "trsvcid": "4420", 00:23:14.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.063 "hdgst": false, 00:23:14.063 "ddgst": false 00:23:14.063 }, 00:23:14.063 "method": "bdev_nvme_attach_controller" 00:23:14.063 },{ 00:23:14.063 "params": { 00:23:14.063 "name": "Nvme2", 00:23:14.063 "trtype": "tcp", 00:23:14.063 "traddr": "10.0.0.2", 00:23:14.063 "adrfam": "ipv4", 00:23:14.063 "trsvcid": "4420", 00:23:14.063 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:14.063 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:14.063 "hdgst": false, 00:23:14.063 "ddgst": false 00:23:14.063 }, 00:23:14.063 "method": "bdev_nvme_attach_controller" 00:23:14.063 }' 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:14.063 16:09:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:14.063 16:09:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:14.063 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:14.063 ... 00:23:14.063 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:14.063 ... 00:23:14.063 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:14.063 ... 00:23:14.063 fio-3.35 00:23:14.063 Starting 24 threads 00:23:26.251 00:23:26.251 filename0: (groupid=0, jobs=1): err= 0: pid=98846: Mon Jul 15 16:09:18 2024 00:23:26.251 read: IOPS=219, BW=877KiB/s (898kB/s)(8812KiB/10047msec) 00:23:26.251 slat (usec): min=5, max=8021, avg=26.89, stdev=351.59 00:23:26.251 clat (msec): min=30, max=155, avg=72.68, stdev=25.91 00:23:26.251 lat (msec): min=30, max=155, avg=72.70, stdev=25.91 00:23:26.251 clat percentiles (msec): 00:23:26.251 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 48], 00:23:26.251 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 73], 00:23:26.251 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 109], 95.00th=[ 123], 00:23:26.252 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 157], 99.95th=[ 157], 00:23:26.252 | 99.99th=[ 157] 00:23:26.252 bw ( KiB/s): min= 512, max= 1248, per=4.40%, avg=874.75, stdev=190.10, samples=20 00:23:26.252 iops : min= 128, max= 312, avg=218.65, stdev=47.55, samples=20 00:23:26.252 lat (msec) : 50=22.20%, 100=63.60%, 250=14.21% 00:23:26.252 cpu : usr=36.99%, sys=1.15%, ctx=1041, majf=0, minf=9 00:23:26.252 IO depths : 1=1.2%, 2=2.6%, 4=10.0%, 8=74.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:23:26.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.252 filename0: (groupid=0, jobs=1): err= 0: pid=98847: Mon Jul 15 16:09:18 2024 00:23:26.252 read: IOPS=193, BW=773KiB/s (791kB/s)(7744KiB/10021msec) 00:23:26.252 slat (usec): min=7, max=8020, avg=14.73, stdev=182.08 00:23:26.252 clat (msec): min=37, max=192, avg=82.68, stdev=26.68 00:23:26.252 lat (msec): min=37, max=192, avg=82.69, stdev=26.68 00:23:26.252 clat percentiles (msec): 00:23:26.252 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:23:26.252 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:23:26.252 | 70.00th=[ 93], 80.00th=[ 101], 90.00th=[ 121], 95.00th=[ 144], 00:23:26.252 | 99.00th=[ 163], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:23:26.252 | 99.99th=[ 192] 00:23:26.252 bw ( KiB/s): min= 512, max= 1040, per=3.87%, avg=770.80, stdev=153.03, samples=20 00:23:26.252 iops : min= 128, max= 260, avg=192.70, stdev=38.26, samples=20 00:23:26.252 lat (msec) : 50=10.49%, 100=68.65%, 250=20.87% 00:23:26.252 cpu : usr=33.04%, sys=0.87%, ctx=967, majf=0, minf=9 00:23:26.252 IO depths : 1=1.3%, 2=3.2%, 4=10.9%, 8=72.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:23:26.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 complete : 0=0.0%, 4=90.4%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 issued rwts: 
total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.252 filename0: (groupid=0, jobs=1): err= 0: pid=98848: Mon Jul 15 16:09:18 2024 00:23:26.252 read: IOPS=212, BW=849KiB/s (870kB/s)(8520KiB/10033msec) 00:23:26.252 slat (usec): min=3, max=8051, avg=17.44, stdev=231.16 00:23:26.252 clat (msec): min=34, max=155, avg=75.27, stdev=23.18 00:23:26.252 lat (msec): min=34, max=155, avg=75.29, stdev=23.18 00:23:26.252 clat percentiles (msec): 00:23:26.252 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:23:26.252 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:23:26.252 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 106], 95.00th=[ 121], 00:23:26.252 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:23:26.252 | 99.99th=[ 155] 00:23:26.252 bw ( KiB/s): min= 640, max= 1024, per=4.25%, avg=845.25, stdev=120.01, samples=20 00:23:26.252 iops : min= 160, max= 256, avg=211.30, stdev=30.01, samples=20 00:23:26.252 lat (msec) : 50=14.41%, 100=74.41%, 250=11.17% 00:23:26.252 cpu : usr=32.25%, sys=0.84%, ctx=992, majf=0, minf=9 00:23:26.252 IO depths : 1=0.8%, 2=1.7%, 4=10.2%, 8=74.6%, 16=12.8%, 32=0.0%, >=64=0.0% 00:23:26.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 issued rwts: total=2130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.252 filename0: (groupid=0, jobs=1): err= 0: pid=98849: Mon Jul 15 16:09:18 2024 00:23:26.252 read: IOPS=249, BW=997KiB/s (1020kB/s)(9.77MiB/10043msec) 00:23:26.252 slat (nsec): min=4830, max=45807, avg=9857.88, stdev=3274.25 00:23:26.252 clat (msec): min=6, max=135, avg=64.10, stdev=21.06 00:23:26.252 lat (msec): min=6, max=135, avg=64.11, stdev=21.06 00:23:26.252 clat percentiles (msec): 00:23:26.252 | 1.00th=[ 13], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 00:23:26.252 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 68], 00:23:26.252 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 105], 00:23:26.252 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:23:26.252 | 99.99th=[ 136] 00:23:26.252 bw ( KiB/s): min= 731, max= 1280, per=5.00%, avg=994.15, stdev=139.62, samples=20 00:23:26.252 iops : min= 182, max= 320, avg=248.50, stdev=34.98, samples=20 00:23:26.252 lat (msec) : 10=0.64%, 20=0.64%, 50=28.74%, 100=63.55%, 250=6.43% 00:23:26.252 cpu : usr=44.70%, sys=1.44%, ctx=1266, majf=0, minf=9 00:23:26.252 IO depths : 1=0.6%, 2=1.3%, 4=7.3%, 8=77.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:23:26.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 complete : 0=0.0%, 4=89.3%, 8=6.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 issued rwts: total=2502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.252 filename0: (groupid=0, jobs=1): err= 0: pid=98850: Mon Jul 15 16:09:18 2024 00:23:26.252 read: IOPS=193, BW=774KiB/s (793kB/s)(7760KiB/10025msec) 00:23:26.252 slat (usec): min=4, max=5021, avg=15.19, stdev=137.84 00:23:26.252 clat (msec): min=38, max=152, avg=82.59, stdev=22.85 00:23:26.252 lat (msec): min=38, max=152, avg=82.61, stdev=22.85 00:23:26.252 clat percentiles (msec): 00:23:26.252 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 64], 00:23:26.252 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 85], 
00:23:26.252 | 70.00th=[ 95], 80.00th=[ 107], 90.00th=[ 115], 95.00th=[ 121], 00:23:26.252 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 153], 99.95th=[ 153], 00:23:26.252 | 99.99th=[ 153] 00:23:26.252 bw ( KiB/s): min= 512, max= 1000, per=3.86%, avg=768.60, stdev=132.30, samples=20 00:23:26.252 iops : min= 128, max= 250, avg=192.10, stdev=33.08, samples=20 00:23:26.252 lat (msec) : 50=10.00%, 100=65.31%, 250=24.69% 00:23:26.252 cpu : usr=38.15%, sys=1.13%, ctx=1066, majf=0, minf=9 00:23:26.252 IO depths : 1=1.9%, 2=4.3%, 4=12.7%, 8=69.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:23:26.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.252 filename0: (groupid=0, jobs=1): err= 0: pid=98851: Mon Jul 15 16:09:18 2024 00:23:26.252 read: IOPS=212, BW=850KiB/s (871kB/s)(8508KiB/10005msec) 00:23:26.252 slat (usec): min=4, max=4025, avg=16.42, stdev=150.54 00:23:26.252 clat (msec): min=35, max=154, avg=75.16, stdev=21.99 00:23:26.252 lat (msec): min=35, max=154, avg=75.17, stdev=21.99 00:23:26.252 clat percentiles (msec): 00:23:26.252 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:23:26.252 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 79], 00:23:26.252 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 113], 00:23:26.252 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:23:26.252 | 99.99th=[ 155] 00:23:26.252 bw ( KiB/s): min= 512, max= 1032, per=4.27%, avg=848.42, stdev=136.85, samples=19 00:23:26.252 iops : min= 128, max= 258, avg=212.11, stdev=34.21, samples=19 00:23:26.252 lat (msec) : 50=16.78%, 100=70.24%, 250=12.98% 00:23:26.252 cpu : usr=36.02%, sys=1.01%, ctx=1068, majf=0, minf=9 00:23:26.252 IO depths : 1=1.8%, 2=3.9%, 4=11.3%, 8=71.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:23:26.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 complete : 0=0.0%, 4=90.5%, 8=5.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 issued rwts: total=2127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.252 filename0: (groupid=0, jobs=1): err= 0: pid=98852: Mon Jul 15 16:09:18 2024 00:23:26.252 read: IOPS=240, BW=962KiB/s (985kB/s)(9652KiB/10033msec) 00:23:26.252 slat (usec): min=3, max=4038, avg=11.39, stdev=82.08 00:23:26.252 clat (msec): min=4, max=155, avg=66.37, stdev=22.64 00:23:26.252 lat (msec): min=4, max=155, avg=66.38, stdev=22.63 00:23:26.252 clat percentiles (msec): 00:23:26.252 | 1.00th=[ 8], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 48], 00:23:26.252 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 72], 00:23:26.252 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:23:26.252 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:23:26.252 | 99.99th=[ 157] 00:23:26.252 bw ( KiB/s): min= 736, max= 1267, per=4.84%, avg=962.15, stdev=156.98, samples=20 00:23:26.252 iops : min= 184, max= 316, avg=240.50, stdev=39.17, samples=20 00:23:26.252 lat (msec) : 10=1.99%, 50=26.32%, 100=64.19%, 250=7.50% 00:23:26.252 cpu : usr=39.41%, sys=1.03%, ctx=1104, majf=0, minf=9 00:23:26.252 IO depths : 1=0.3%, 2=0.7%, 4=5.9%, 8=79.5%, 16=13.6%, 32=0.0%, >=64=0.0% 00:23:26.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 complete : 0=0.0%, 
4=88.9%, 8=7.0%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 issued rwts: total=2413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.252 filename0: (groupid=0, jobs=1): err= 0: pid=98853: Mon Jul 15 16:09:18 2024 00:23:26.252 read: IOPS=198, BW=792KiB/s (811kB/s)(7956KiB/10045msec) 00:23:26.252 slat (nsec): min=3900, max=45592, avg=10346.69, stdev=3674.40 00:23:26.252 clat (msec): min=34, max=169, avg=80.64, stdev=24.48 00:23:26.252 lat (msec): min=34, max=169, avg=80.65, stdev=24.48 00:23:26.252 clat percentiles (msec): 00:23:26.252 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:23:26.252 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 82], 00:23:26.252 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 118], 95.00th=[ 121], 00:23:26.252 | 99.00th=[ 144], 99.50th=[ 165], 99.90th=[ 171], 99.95th=[ 171], 00:23:26.252 | 99.99th=[ 171] 00:23:26.252 bw ( KiB/s): min= 512, max= 1072, per=3.97%, avg=790.45, stdev=158.01, samples=20 00:23:26.252 iops : min= 128, max= 268, avg=197.60, stdev=39.51, samples=20 00:23:26.252 lat (msec) : 50=10.76%, 100=64.76%, 250=24.48% 00:23:26.252 cpu : usr=43.15%, sys=1.12%, ctx=1235, majf=0, minf=9 00:23:26.252 IO depths : 1=2.6%, 2=5.7%, 4=15.1%, 8=66.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:23:26.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 complete : 0=0.0%, 4=91.5%, 8=3.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.252 issued rwts: total=1989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.253 filename1: (groupid=0, jobs=1): err= 0: pid=98854: Mon Jul 15 16:09:18 2024 00:23:26.253 read: IOPS=209, BW=839KiB/s (859kB/s)(8424KiB/10040msec) 00:23:26.253 slat (usec): min=7, max=8020, avg=19.66, stdev=228.96 00:23:26.253 clat (msec): min=35, max=157, avg=76.12, stdev=23.82 00:23:26.253 lat (msec): min=35, max=157, avg=76.14, stdev=23.82 00:23:26.253 clat percentiles (msec): 00:23:26.253 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:23:26.253 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:23:26.253 | 70.00th=[ 89], 80.00th=[ 101], 90.00th=[ 112], 95.00th=[ 116], 00:23:26.253 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 159], 00:23:26.253 | 99.99th=[ 159] 00:23:26.253 bw ( KiB/s): min= 512, max= 1272, per=4.20%, avg=835.70, stdev=178.87, samples=20 00:23:26.253 iops : min= 128, max= 318, avg=208.90, stdev=44.71, samples=20 00:23:26.253 lat (msec) : 50=15.43%, 100=65.05%, 250=19.52% 00:23:26.253 cpu : usr=45.42%, sys=1.31%, ctx=1436, majf=0, minf=9 00:23:26.253 IO depths : 1=2.6%, 2=6.0%, 4=16.0%, 8=65.2%, 16=10.2%, 32=0.0%, >=64=0.0% 00:23:26.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 complete : 0=0.0%, 4=91.7%, 8=2.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 issued rwts: total=2106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.253 filename1: (groupid=0, jobs=1): err= 0: pid=98855: Mon Jul 15 16:09:18 2024 00:23:26.253 read: IOPS=196, BW=787KiB/s (806kB/s)(7892KiB/10027msec) 00:23:26.253 slat (nsec): min=4927, max=30458, avg=10176.99, stdev=3399.97 00:23:26.253 clat (msec): min=36, max=146, avg=81.23, stdev=22.38 00:23:26.253 lat (msec): min=36, max=146, avg=81.24, stdev=22.38 00:23:26.253 clat percentiles (msec): 00:23:26.253 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 62], 
00:23:26.253 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 85], 00:23:26.253 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 112], 95.00th=[ 121], 00:23:26.253 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:23:26.253 | 99.99th=[ 146] 00:23:26.253 bw ( KiB/s): min= 640, max= 976, per=3.93%, avg=782.50, stdev=86.94, samples=20 00:23:26.253 iops : min= 160, max= 244, avg=195.60, stdev=21.70, samples=20 00:23:26.253 lat (msec) : 50=10.49%, 100=70.65%, 250=18.85% 00:23:26.253 cpu : usr=32.14%, sys=0.97%, ctx=976, majf=0, minf=9 00:23:26.253 IO depths : 1=1.9%, 2=4.1%, 4=11.6%, 8=70.9%, 16=11.7%, 32=0.0%, >=64=0.0% 00:23:26.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 issued rwts: total=1973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.253 filename1: (groupid=0, jobs=1): err= 0: pid=98856: Mon Jul 15 16:09:18 2024 00:23:26.253 read: IOPS=225, BW=904KiB/s (926kB/s)(9040KiB/10002msec) 00:23:26.253 slat (usec): min=4, max=8016, avg=19.99, stdev=228.82 00:23:26.253 clat (msec): min=30, max=191, avg=70.69, stdev=22.75 00:23:26.253 lat (msec): min=30, max=191, avg=70.71, stdev=22.75 00:23:26.253 clat percentiles (msec): 00:23:26.253 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 51], 00:23:26.253 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 73], 00:23:26.253 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 105], 95.00th=[ 112], 00:23:26.253 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 192], 99.95th=[ 192], 00:23:26.253 | 99.99th=[ 192] 00:23:26.253 bw ( KiB/s): min= 576, max= 1120, per=4.51%, avg=897.37, stdev=155.31, samples=19 00:23:26.253 iops : min= 144, max= 280, avg=224.32, stdev=38.83, samples=19 00:23:26.253 lat (msec) : 50=18.45%, 100=69.96%, 250=11.59% 00:23:26.253 cpu : usr=44.17%, sys=1.23%, ctx=1532, majf=0, minf=9 00:23:26.253 IO depths : 1=1.1%, 2=2.4%, 4=9.0%, 8=75.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:23:26.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 issued rwts: total=2260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.253 filename1: (groupid=0, jobs=1): err= 0: pid=98857: Mon Jul 15 16:09:18 2024 00:23:26.253 read: IOPS=217, BW=868KiB/s (889kB/s)(8720KiB/10041msec) 00:23:26.253 slat (usec): min=4, max=4984, avg=13.42, stdev=107.67 00:23:26.253 clat (msec): min=2, max=155, avg=73.45, stdev=27.48 00:23:26.253 lat (msec): min=2, max=155, avg=73.46, stdev=27.49 00:23:26.253 clat percentiles (msec): 00:23:26.253 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 50], 00:23:26.253 | 30.00th=[ 59], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 78], 00:23:26.253 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 121], 00:23:26.253 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:23:26.253 | 99.99th=[ 157] 00:23:26.253 bw ( KiB/s): min= 528, max= 1712, per=4.37%, avg=869.60, stdev=256.83, samples=20 00:23:26.253 iops : min= 132, max= 428, avg=217.40, stdev=64.21, samples=20 00:23:26.253 lat (msec) : 4=0.73%, 10=2.94%, 50=17.11%, 100=63.99%, 250=15.23% 00:23:26.253 cpu : usr=36.31%, sys=0.83%, ctx=1072, majf=0, minf=9 00:23:26.253 IO depths : 1=1.3%, 2=2.7%, 4=9.5%, 8=74.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:23:26.253 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 issued rwts: total=2180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.253 filename1: (groupid=0, jobs=1): err= 0: pid=98858: Mon Jul 15 16:09:18 2024 00:23:26.253 read: IOPS=221, BW=888KiB/s (909kB/s)(8924KiB/10050msec) 00:23:26.253 slat (usec): min=4, max=8021, avg=30.31, stdev=393.23 00:23:26.253 clat (msec): min=33, max=152, avg=71.75, stdev=21.48 00:23:26.253 lat (msec): min=33, max=152, avg=71.78, stdev=21.49 00:23:26.253 clat percentiles (msec): 00:23:26.253 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 51], 00:23:26.253 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 74], 00:23:26.253 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 105], 95.00th=[ 112], 00:23:26.253 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 153], 99.95th=[ 153], 00:23:26.253 | 99.99th=[ 153] 00:23:26.253 bw ( KiB/s): min= 640, max= 1200, per=4.45%, avg=885.95, stdev=148.08, samples=20 00:23:26.253 iops : min= 160, max= 300, avg=221.45, stdev=37.05, samples=20 00:23:26.253 lat (msec) : 50=19.45%, 100=69.57%, 250=10.98% 00:23:26.253 cpu : usr=36.50%, sys=1.04%, ctx=1076, majf=0, minf=9 00:23:26.253 IO depths : 1=1.1%, 2=2.6%, 4=9.1%, 8=74.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:23:26.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 issued rwts: total=2231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.253 filename1: (groupid=0, jobs=1): err= 0: pid=98859: Mon Jul 15 16:09:18 2024 00:23:26.253 read: IOPS=209, BW=838KiB/s (858kB/s)(8408KiB/10031msec) 00:23:26.253 slat (usec): min=3, max=8019, avg=18.17, stdev=246.98 00:23:26.253 clat (msec): min=33, max=165, avg=76.13, stdev=21.25 00:23:26.253 lat (msec): min=33, max=165, avg=76.15, stdev=21.25 00:23:26.253 clat percentiles (msec): 00:23:26.253 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:23:26.253 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:23:26.253 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 114], 00:23:26.253 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 165], 99.95th=[ 165], 00:23:26.253 | 99.99th=[ 165] 00:23:26.253 bw ( KiB/s): min= 544, max= 1072, per=4.20%, avg=834.45, stdev=115.21, samples=20 00:23:26.253 iops : min= 136, max= 268, avg=208.60, stdev=28.79, samples=20 00:23:26.253 lat (msec) : 50=12.70%, 100=75.36%, 250=11.94% 00:23:26.253 cpu : usr=37.60%, sys=1.02%, ctx=1243, majf=0, minf=9 00:23:26.253 IO depths : 1=1.4%, 2=3.5%, 4=12.3%, 8=71.0%, 16=11.8%, 32=0.0%, >=64=0.0% 00:23:26.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 complete : 0=0.0%, 4=90.6%, 8=4.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.253 filename1: (groupid=0, jobs=1): err= 0: pid=98860: Mon Jul 15 16:09:18 2024 00:23:26.253 read: IOPS=213, BW=855KiB/s (876kB/s)(8616KiB/10072msec) 00:23:26.253 slat (usec): min=4, max=8023, avg=18.09, stdev=244.06 00:23:26.253 clat (msec): min=7, max=167, avg=74.57, stdev=22.47 00:23:26.253 lat (msec): min=7, max=167, avg=74.59, stdev=22.47 00:23:26.253 clat percentiles (msec): 00:23:26.253 | 
1.00th=[ 17], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:23:26.253 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:23:26.253 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 120], 00:23:26.253 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 153], 99.95th=[ 153], 00:23:26.253 | 99.99th=[ 169] 00:23:26.253 bw ( KiB/s): min= 640, max= 1274, per=4.30%, avg=854.90, stdev=152.77, samples=20 00:23:26.253 iops : min= 160, max= 318, avg=213.70, stdev=38.12, samples=20 00:23:26.253 lat (msec) : 10=0.74%, 20=0.74%, 50=13.74%, 100=72.24%, 250=12.53% 00:23:26.253 cpu : usr=39.86%, sys=1.04%, ctx=1045, majf=0, minf=9 00:23:26.253 IO depths : 1=1.8%, 2=3.9%, 4=11.3%, 8=71.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:23:26.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 complete : 0=0.0%, 4=90.5%, 8=5.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 issued rwts: total=2154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.253 filename1: (groupid=0, jobs=1): err= 0: pid=98861: Mon Jul 15 16:09:18 2024 00:23:26.253 read: IOPS=178, BW=714KiB/s (731kB/s)(7152KiB/10021msec) 00:23:26.253 slat (usec): min=7, max=8061, avg=22.10, stdev=284.68 00:23:26.253 clat (msec): min=37, max=178, avg=89.55, stdev=25.54 00:23:26.253 lat (msec): min=37, max=178, avg=89.57, stdev=25.53 00:23:26.253 clat percentiles (msec): 00:23:26.253 | 1.00th=[ 40], 5.00th=[ 52], 10.00th=[ 63], 20.00th=[ 71], 00:23:26.253 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 95], 00:23:26.253 | 70.00th=[ 101], 80.00th=[ 109], 90.00th=[ 124], 95.00th=[ 144], 00:23:26.253 | 99.00th=[ 150], 99.50th=[ 176], 99.90th=[ 180], 99.95th=[ 180], 00:23:26.253 | 99.99th=[ 180] 00:23:26.253 bw ( KiB/s): min= 384, max= 896, per=3.56%, avg=708.25, stdev=135.10, samples=20 00:23:26.253 iops : min= 96, max= 224, avg=177.05, stdev=33.76, samples=20 00:23:26.253 lat (msec) : 50=3.97%, 100=66.28%, 250=29.75% 00:23:26.253 cpu : usr=38.96%, sys=1.01%, ctx=1083, majf=0, minf=9 00:23:26.253 IO depths : 1=2.5%, 2=5.3%, 4=15.2%, 8=66.6%, 16=10.4%, 32=0.0%, >=64=0.0% 00:23:26.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 complete : 0=0.0%, 4=91.0%, 8=3.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.253 issued rwts: total=1788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.253 filename2: (groupid=0, jobs=1): err= 0: pid=98862: Mon Jul 15 16:09:18 2024 00:23:26.254 read: IOPS=187, BW=748KiB/s (766kB/s)(7500KiB/10025msec) 00:23:26.254 slat (usec): min=4, max=3470, avg=12.22, stdev=79.98 00:23:26.254 clat (msec): min=26, max=178, avg=85.44, stdev=27.38 00:23:26.254 lat (msec): min=26, max=178, avg=85.45, stdev=27.38 00:23:26.254 clat percentiles (msec): 00:23:26.254 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 68], 00:23:26.254 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 87], 00:23:26.254 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 134], 00:23:26.254 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:23:26.254 | 99.99th=[ 180] 00:23:26.254 bw ( KiB/s): min= 512, max= 1000, per=3.74%, avg=743.10, stdev=146.15, samples=20 00:23:26.254 iops : min= 128, max= 250, avg=185.75, stdev=36.53, samples=20 00:23:26.254 lat (msec) : 50=11.41%, 100=59.36%, 250=29.23% 00:23:26.254 cpu : usr=34.40%, sys=1.11%, ctx=1113, majf=0, minf=9 00:23:26.254 IO depths : 1=2.7%, 2=6.0%, 4=15.6%, 
8=65.2%, 16=10.5%, 32=0.0%, >=64=0.0% 00:23:26.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 complete : 0=0.0%, 4=91.6%, 8=3.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 issued rwts: total=1875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.254 filename2: (groupid=0, jobs=1): err= 0: pid=98863: Mon Jul 15 16:09:18 2024 00:23:26.254 read: IOPS=250, BW=1004KiB/s (1028kB/s)(9.84MiB/10035msec) 00:23:26.254 slat (usec): min=3, max=7502, avg=19.07, stdev=209.99 00:23:26.254 clat (msec): min=3, max=133, avg=63.58, stdev=20.10 00:23:26.254 lat (msec): min=3, max=133, avg=63.60, stdev=20.11 00:23:26.254 clat percentiles (msec): 00:23:26.254 | 1.00th=[ 9], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 48], 00:23:26.254 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 63], 60.00th=[ 69], 00:23:26.254 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 90], 95.00th=[ 103], 00:23:26.254 | 99.00th=[ 122], 99.50th=[ 128], 99.90th=[ 134], 99.95th=[ 134], 00:23:26.254 | 99.99th=[ 134] 00:23:26.254 bw ( KiB/s): min= 736, max= 1269, per=5.04%, avg=1002.65, stdev=155.68, samples=20 00:23:26.254 iops : min= 184, max= 317, avg=250.65, stdev=38.90, samples=20 00:23:26.254 lat (msec) : 4=0.64%, 10=1.11%, 20=0.16%, 50=26.97%, 100=65.93% 00:23:26.254 lat (msec) : 250=5.20% 00:23:26.254 cpu : usr=43.78%, sys=1.23%, ctx=1305, majf=0, minf=9 00:23:26.254 IO depths : 1=0.6%, 2=1.2%, 4=7.3%, 8=78.0%, 16=12.9%, 32=0.0%, >=64=0.0% 00:23:26.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 complete : 0=0.0%, 4=89.3%, 8=6.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 issued rwts: total=2518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.254 filename2: (groupid=0, jobs=1): err= 0: pid=98864: Mon Jul 15 16:09:18 2024 00:23:26.254 read: IOPS=212, BW=851KiB/s (872kB/s)(8528KiB/10020msec) 00:23:26.254 slat (usec): min=5, max=8024, avg=17.32, stdev=198.92 00:23:26.254 clat (msec): min=38, max=159, avg=75.02, stdev=22.93 00:23:26.254 lat (msec): min=38, max=159, avg=75.04, stdev=22.93 00:23:26.254 clat percentiles (msec): 00:23:26.254 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:23:26.254 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:23:26.254 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 110], 95.00th=[ 120], 00:23:26.254 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 161], 99.95th=[ 161], 00:23:26.254 | 99.99th=[ 161] 00:23:26.254 bw ( KiB/s): min= 512, max= 1128, per=4.28%, avg=850.40, stdev=169.05, samples=20 00:23:26.254 iops : min= 128, max= 282, avg=212.60, stdev=42.26, samples=20 00:23:26.254 lat (msec) : 50=15.34%, 100=68.71%, 250=15.95% 00:23:26.254 cpu : usr=39.71%, sys=1.09%, ctx=1294, majf=0, minf=9 00:23:26.254 IO depths : 1=1.4%, 2=3.1%, 4=10.6%, 8=73.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:23:26.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.254 filename2: (groupid=0, jobs=1): err= 0: pid=98865: Mon Jul 15 16:09:18 2024 00:23:26.254 read: IOPS=182, BW=730KiB/s (747kB/s)(7308KiB/10017msec) 00:23:26.254 slat (usec): min=3, max=8019, avg=18.31, stdev=211.09 00:23:26.254 clat (msec): min=20, max=199, avg=87.60, 
stdev=26.98 00:23:26.254 lat (msec): min=20, max=199, avg=87.62, stdev=26.99 00:23:26.254 clat percentiles (msec): 00:23:26.254 | 1.00th=[ 43], 5.00th=[ 50], 10.00th=[ 57], 20.00th=[ 69], 00:23:26.254 | 30.00th=[ 73], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 91], 00:23:26.254 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 144], 00:23:26.254 | 99.00th=[ 167], 99.50th=[ 190], 99.90th=[ 201], 99.95th=[ 201], 00:23:26.254 | 99.99th=[ 201] 00:23:26.254 bw ( KiB/s): min= 512, max= 896, per=3.64%, avg=724.20, stdev=105.49, samples=20 00:23:26.254 iops : min= 128, max= 224, avg=181.05, stdev=26.37, samples=20 00:23:26.254 lat (msec) : 50=5.47%, 100=67.00%, 250=27.53% 00:23:26.254 cpu : usr=40.36%, sys=1.21%, ctx=1140, majf=0, minf=9 00:23:26.254 IO depths : 1=2.9%, 2=6.6%, 4=17.4%, 8=63.4%, 16=9.8%, 32=0.0%, >=64=0.0% 00:23:26.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 issued rwts: total=1827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.254 filename2: (groupid=0, jobs=1): err= 0: pid=98866: Mon Jul 15 16:09:18 2024 00:23:26.254 read: IOPS=205, BW=821KiB/s (841kB/s)(8240KiB/10033msec) 00:23:26.254 slat (usec): min=3, max=4025, avg=14.30, stdev=117.93 00:23:26.254 clat (msec): min=31, max=160, avg=77.84, stdev=21.63 00:23:26.254 lat (msec): min=31, max=160, avg=77.86, stdev=21.63 00:23:26.254 clat percentiles (msec): 00:23:26.254 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:23:26.254 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:23:26.254 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 121], 00:23:26.254 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161], 00:23:26.254 | 99.99th=[ 161] 00:23:26.254 bw ( KiB/s): min= 640, max= 1010, per=4.11%, avg=817.35, stdev=107.73, samples=20 00:23:26.254 iops : min= 160, max= 252, avg=204.30, stdev=26.88, samples=20 00:23:26.254 lat (msec) : 50=10.15%, 100=74.22%, 250=15.63% 00:23:26.254 cpu : usr=39.21%, sys=0.88%, ctx=1227, majf=0, minf=9 00:23:26.254 IO depths : 1=1.2%, 2=2.6%, 4=10.6%, 8=73.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:23:26.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.254 filename2: (groupid=0, jobs=1): err= 0: pid=98867: Mon Jul 15 16:09:18 2024 00:23:26.254 read: IOPS=175, BW=702KiB/s (719kB/s)(7028KiB/10016msec) 00:23:26.254 slat (usec): min=5, max=8021, avg=15.29, stdev=191.15 00:23:26.254 clat (msec): min=22, max=189, avg=91.06, stdev=26.53 00:23:26.254 lat (msec): min=22, max=189, avg=91.07, stdev=26.52 00:23:26.254 clat percentiles (msec): 00:23:26.254 | 1.00th=[ 39], 5.00th=[ 50], 10.00th=[ 63], 20.00th=[ 72], 00:23:26.254 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 96], 00:23:26.254 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 123], 95.00th=[ 144], 00:23:26.254 | 99.00th=[ 165], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 190], 00:23:26.254 | 99.99th=[ 190] 00:23:26.254 bw ( KiB/s): min= 472, max= 952, per=3.52%, avg=699.05, stdev=107.03, samples=20 00:23:26.254 iops : min= 118, max= 238, avg=174.75, stdev=26.76, samples=20 00:23:26.254 lat (msec) : 50=5.35%, 100=62.78%, 250=31.87% 00:23:26.254 
cpu : usr=33.40%, sys=1.09%, ctx=1027, majf=0, minf=9 00:23:26.254 IO depths : 1=2.0%, 2=4.7%, 4=14.1%, 8=67.8%, 16=11.4%, 32=0.0%, >=64=0.0% 00:23:26.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 complete : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 issued rwts: total=1757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.254 filename2: (groupid=0, jobs=1): err= 0: pid=98868: Mon Jul 15 16:09:18 2024 00:23:26.254 read: IOPS=187, BW=750KiB/s (768kB/s)(7520KiB/10021msec) 00:23:26.254 slat (usec): min=5, max=8026, avg=23.65, stdev=319.88 00:23:26.254 clat (msec): min=23, max=179, avg=85.09, stdev=25.11 00:23:26.254 lat (msec): min=23, max=179, avg=85.11, stdev=25.11 00:23:26.254 clat percentiles (msec): 00:23:26.254 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 66], 00:23:26.254 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 91], 00:23:26.254 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 123], 00:23:26.254 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:23:26.254 | 99.99th=[ 180] 00:23:26.254 bw ( KiB/s): min= 512, max= 920, per=3.75%, avg=745.40, stdev=120.11, samples=20 00:23:26.254 iops : min= 128, max= 230, avg=186.35, stdev=30.03, samples=20 00:23:26.254 lat (msec) : 50=8.56%, 100=65.69%, 250=25.74% 00:23:26.254 cpu : usr=33.04%, sys=0.92%, ctx=962, majf=0, minf=9 00:23:26.254 IO depths : 1=2.3%, 2=5.1%, 4=13.5%, 8=68.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:23:26.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 complete : 0=0.0%, 4=91.1%, 8=3.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.254 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.254 filename2: (groupid=0, jobs=1): err= 0: pid=98869: Mon Jul 15 16:09:18 2024 00:23:26.254 read: IOPS=196, BW=785KiB/s (803kB/s)(7872KiB/10034msec) 00:23:26.254 slat (usec): min=4, max=8020, avg=14.62, stdev=180.60 00:23:26.254 clat (msec): min=37, max=162, avg=81.41, stdev=26.36 00:23:26.254 lat (msec): min=37, max=162, avg=81.43, stdev=26.37 00:23:26.254 clat percentiles (msec): 00:23:26.254 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:23:26.254 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:23:26.254 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 134], 00:23:26.254 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 163], 00:23:26.254 | 99.99th=[ 163] 00:23:26.254 bw ( KiB/s): min= 512, max= 1120, per=3.92%, avg=780.70, stdev=183.84, samples=20 00:23:26.255 iops : min= 128, max= 280, avg=195.15, stdev=45.98, samples=20 00:23:26.255 lat (msec) : 50=13.97%, 100=63.57%, 250=22.46% 00:23:26.255 cpu : usr=32.13%, sys=0.95%, ctx=978, majf=0, minf=9 00:23:26.255 IO depths : 1=1.0%, 2=2.2%, 4=9.8%, 8=74.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:23:26.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.255 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.255 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.255 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.255 00:23:26.255 Run status group 0 (all jobs): 00:23:26.255 READ: bw=19.4MiB/s (20.4MB/s), 702KiB/s-1004KiB/s (719kB/s-1028kB/s), io=196MiB (205MB), run=10002-10072msec 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 bdev_null0 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 [2024-07-15 16:09:18.487112] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.255 bdev_null1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.255 { 00:23:26.255 "params": { 00:23:26.255 "name": "Nvme$subsystem", 00:23:26.255 "trtype": "$TEST_TRANSPORT", 00:23:26.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.255 "adrfam": "ipv4", 00:23:26.255 "trsvcid": "$NVMF_PORT", 00:23:26.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.255 "hdgst": ${hdgst:-false}, 00:23:26.255 "ddgst": ${ddgst:-false} 00:23:26.255 }, 00:23:26.255 "method": "bdev_nvme_attach_controller" 00:23:26.255 } 00:23:26.255 EOF 00:23:26.255 )") 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@54 -- # local file 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:26.255 16:09:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.256 { 00:23:26.256 "params": { 00:23:26.256 "name": "Nvme$subsystem", 00:23:26.256 "trtype": "$TEST_TRANSPORT", 00:23:26.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.256 "adrfam": "ipv4", 00:23:26.256 "trsvcid": "$NVMF_PORT", 00:23:26.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.256 "hdgst": ${hdgst:-false}, 00:23:26.256 "ddgst": ${ddgst:-false} 00:23:26.256 }, 00:23:26.256 "method": "bdev_nvme_attach_controller" 00:23:26.256 } 00:23:26.256 EOF 00:23:26.256 )") 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
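Note: the create_subsystem calls traced at the start of this run (target/dif.sh@21-24, where rpc_cmd is the harness wrapper around scripts/rpc.py) reduce to four JSON-RPC calls against the running nvmf_tgt. A minimal standalone sketch, assuming the default /var/tmp/spdk.sock RPC socket and a TCP transport already created with nvmf_create_transport, would be (sub=0 here; the second subsystem repeats the same steps with sub=1):
rpc=scripts/rpc.py
sub=0
$rpc bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1   # 64 MiB null bdev, 512B blocks, DIF type 1
$rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
    --serial-number "53313233-${sub}" --allow-any-host
$rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
$rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" -t tcp -a 10.0.0.2 -s 4420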
00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:26.256 "params": { 00:23:26.256 "name": "Nvme0", 00:23:26.256 "trtype": "tcp", 00:23:26.256 "traddr": "10.0.0.2", 00:23:26.256 "adrfam": "ipv4", 00:23:26.256 "trsvcid": "4420", 00:23:26.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:26.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:26.256 "hdgst": false, 00:23:26.256 "ddgst": false 00:23:26.256 }, 00:23:26.256 "method": "bdev_nvme_attach_controller" 00:23:26.256 },{ 00:23:26.256 "params": { 00:23:26.256 "name": "Nvme1", 00:23:26.256 "trtype": "tcp", 00:23:26.256 "traddr": "10.0.0.2", 00:23:26.256 "adrfam": "ipv4", 00:23:26.256 "trsvcid": "4420", 00:23:26.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.256 "hdgst": false, 00:23:26.256 "ddgst": false 00:23:26.256 }, 00:23:26.256 "method": "bdev_nvme_attach_controller" 00:23:26.256 }' 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:26.256 16:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:26.256 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:26.256 ... 00:23:26.256 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:26.256 ... 
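The fio job file itself is passed on /dev/fd/61 by gen_fio_conf, so its exact text is not echoed in this log. Based on the parameters set at target/dif.sh@115 and the job descriptions fio prints above, a hand-written equivalent would look roughly like the sketch below; the exact global options and the filename-to-bdev mapping (bdev_null0/bdev_null1 from the RPC setup) are assumptions, not copied from the script:
cat > dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
numjobs=2
iodepth=8
runtime=5
time_based=1

[filename0]
filename=bdev_null0

[filename1]
filename=bdev_null1
EOF
With numjobs=2 and two filename sections, this is what produces the "Starting 4 threads" line in the fio output that follows.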
00:23:26.256 fio-3.35 00:23:26.256 Starting 4 threads 00:23:31.556 00:23:31.556 filename0: (groupid=0, jobs=1): err= 0: pid=99000: Mon Jul 15 16:09:24 2024 00:23:31.556 read: IOPS=1969, BW=15.4MiB/s (16.1MB/s)(76.9MiB/5001msec) 00:23:31.556 slat (nsec): min=7346, max=45022, avg=12694.79, stdev=3620.91 00:23:31.556 clat (usec): min=2066, max=5604, avg=4003.90, stdev=130.85 00:23:31.556 lat (usec): min=2079, max=5630, avg=4016.60, stdev=130.68 00:23:31.556 clat percentiles (usec): 00:23:31.556 | 1.00th=[ 3884], 5.00th=[ 3916], 10.00th=[ 3949], 20.00th=[ 3949], 00:23:31.556 | 30.00th=[ 3982], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4015], 00:23:31.556 | 70.00th=[ 4047], 80.00th=[ 4047], 90.00th=[ 4080], 95.00th=[ 4113], 00:23:31.556 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 4948], 99.95th=[ 5538], 00:23:31.556 | 99.99th=[ 5604] 00:23:31.556 bw ( KiB/s): min=15519, max=15872, per=24.99%, avg=15747.44, stdev=120.44, samples=9 00:23:31.556 iops : min= 1939, max= 1984, avg=1968.33, stdev=15.26, samples=9 00:23:31.556 lat (msec) : 4=48.68%, 10=51.32% 00:23:31.556 cpu : usr=94.34%, sys=4.62%, ctx=6, majf=0, minf=9 00:23:31.556 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.556 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.556 issued rwts: total=9848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:31.556 filename0: (groupid=0, jobs=1): err= 0: pid=99001: Mon Jul 15 16:09:24 2024 00:23:31.556 read: IOPS=1969, BW=15.4MiB/s (16.1MB/s)(77.0MiB/5001msec) 00:23:31.556 slat (nsec): min=7135, max=67176, avg=9567.66, stdev=3182.15 00:23:31.556 clat (usec): min=1933, max=5176, avg=4030.58, stdev=136.21 00:23:31.556 lat (usec): min=1941, max=5188, avg=4040.14, stdev=136.28 00:23:31.556 clat percentiles (usec): 00:23:31.556 | 1.00th=[ 3458], 5.00th=[ 3949], 10.00th=[ 3982], 20.00th=[ 3982], 00:23:31.556 | 30.00th=[ 4015], 40.00th=[ 4015], 50.00th=[ 4015], 60.00th=[ 4047], 00:23:31.556 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4113], 95.00th=[ 4146], 00:23:31.556 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 4752], 99.95th=[ 4817], 00:23:31.556 | 99.99th=[ 5145] 00:23:31.556 bw ( KiB/s): min=15616, max=15872, per=25.00%, avg=15758.22, stdev=84.45, samples=9 00:23:31.556 iops : min= 1952, max= 1984, avg=1969.78, stdev=10.56, samples=9 00:23:31.556 lat (msec) : 2=0.03%, 4=26.69%, 10=73.28% 00:23:31.556 cpu : usr=94.42%, sys=4.48%, ctx=7, majf=0, minf=0 00:23:31.556 IO depths : 1=0.1%, 2=0.1%, 4=74.9%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.556 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.556 issued rwts: total=9851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.557 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:31.557 filename1: (groupid=0, jobs=1): err= 0: pid=99002: Mon Jul 15 16:09:24 2024 00:23:31.557 read: IOPS=1968, BW=15.4MiB/s (16.1MB/s)(76.9MiB/5002msec) 00:23:31.557 slat (nsec): min=4447, max=47677, avg=13152.17, stdev=4383.85 00:23:31.557 clat (usec): min=2117, max=6592, avg=3993.01, stdev=131.40 00:23:31.557 lat (usec): min=2124, max=6600, avg=4006.16, stdev=132.02 00:23:31.557 clat percentiles (usec): 00:23:31.557 | 1.00th=[ 3884], 5.00th=[ 3916], 10.00th=[ 3916], 20.00th=[ 3949], 00:23:31.557 | 30.00th=[ 3949], 40.00th=[ 
3982], 50.00th=[ 3982], 60.00th=[ 4015], 00:23:31.557 | 70.00th=[ 4015], 80.00th=[ 4047], 90.00th=[ 4080], 95.00th=[ 4080], 00:23:31.557 | 99.00th=[ 4146], 99.50th=[ 4178], 99.90th=[ 6128], 99.95th=[ 6390], 00:23:31.557 | 99.99th=[ 6587] 00:23:31.557 bw ( KiB/s): min=15488, max=15872, per=24.98%, avg=15744.00, stdev=128.00, samples=9 00:23:31.557 iops : min= 1936, max= 1984, avg=1968.00, stdev=16.00, samples=9 00:23:31.557 lat (msec) : 4=57.53%, 10=42.47% 00:23:31.557 cpu : usr=93.64%, sys=5.26%, ctx=20, majf=0, minf=9 00:23:31.557 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.557 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.557 issued rwts: total=9848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.557 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:31.557 filename1: (groupid=0, jobs=1): err= 0: pid=99003: Mon Jul 15 16:09:24 2024 00:23:31.557 read: IOPS=1971, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5003msec) 00:23:31.557 slat (nsec): min=7030, max=51568, avg=8505.88, stdev=2236.22 00:23:31.557 clat (usec): min=1283, max=4620, avg=4014.12, stdev=154.79 00:23:31.557 lat (usec): min=1305, max=4628, avg=4022.62, stdev=154.42 00:23:31.557 clat percentiles (usec): 00:23:31.557 | 1.00th=[ 3589], 5.00th=[ 3949], 10.00th=[ 3949], 20.00th=[ 3982], 00:23:31.557 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4015], 60.00th=[ 4015], 00:23:31.557 | 70.00th=[ 4047], 80.00th=[ 4047], 90.00th=[ 4080], 95.00th=[ 4113], 00:23:31.557 | 99.00th=[ 4424], 99.50th=[ 4490], 99.90th=[ 4555], 99.95th=[ 4555], 00:23:31.557 | 99.99th=[ 4621] 00:23:31.557 bw ( KiB/s): min=15616, max=15872, per=25.03%, avg=15776.11, stdev=84.66, samples=9 00:23:31.557 iops : min= 1952, max= 1984, avg=1972.00, stdev=10.58, samples=9 00:23:31.557 lat (msec) : 2=0.16%, 4=36.65%, 10=63.19% 00:23:31.557 cpu : usr=93.68%, sys=5.22%, ctx=7, majf=0, minf=0 00:23:31.557 IO depths : 1=10.5%, 2=25.0%, 4=50.0%, 8=14.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.557 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.557 issued rwts: total=9864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.557 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:31.557 00:23:31.557 Run status group 0 (all jobs): 00:23:31.557 READ: bw=61.5MiB/s (64.5MB/s), 15.4MiB/s-15.4MiB/s (16.1MB/s-16.2MB/s), io=308MiB (323MB), run=5001-5003msec 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 
-- # rpc_cmd bdev_null_delete bdev_null0 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.557 00:23:31.557 real 0m23.656s 00:23:31.557 user 2m6.438s 00:23:31.557 sys 0m5.343s 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:31.557 ************************************ 00:23:31.557 16:09:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:31.557 END TEST fio_dif_rand_params 00:23:31.557 ************************************ 00:23:31.557 16:09:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:31.557 16:09:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:31.557 16:09:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:31.557 16:09:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:31.557 16:09:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:31.557 ************************************ 00:23:31.557 START TEST fio_dif_digest 00:23:31.557 ************************************ 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:23:31.557 16:09:24 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:31.557 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:31.558 bdev_null0 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:31.558 [2024-07-15 16:09:24.694917] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.558 { 00:23:31.558 "params": { 00:23:31.558 "name": "Nvme$subsystem", 00:23:31.558 "trtype": "$TEST_TRANSPORT", 00:23:31.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.558 "adrfam": "ipv4", 00:23:31.558 "trsvcid": "$NVMF_PORT", 00:23:31.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.558 "hdgst": ${hdgst:-false}, 00:23:31.558 "ddgst": ${ddgst:-false} 00:23:31.558 }, 00:23:31.558 "method": "bdev_nvme_attach_controller" 00:23:31.558 } 00:23:31.558 EOF 00:23:31.558 )") 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
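As in the previous test, the two /dev/fd paths on the fio command line are bash process substitutions: the bdev_nvme_attach_controller JSON goes in on one descriptor and the generated job file on the other, so nothing is written to disk. Outside the harness, the same launch pattern (gen_nvmf_target_json and gen_fio_conf as traced above, plugin path taken from this run) is approximately:
fio_plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
LD_PRELOAD="$fio_plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) \
    <(gen_fio_conf)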
00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:31.558 "params": { 00:23:31.558 "name": "Nvme0", 00:23:31.558 "trtype": "tcp", 00:23:31.558 "traddr": "10.0.0.2", 00:23:31.558 "adrfam": "ipv4", 00:23:31.558 "trsvcid": "4420", 00:23:31.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:31.558 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:31.558 "hdgst": true, 00:23:31.558 "ddgst": true 00:23:31.558 }, 00:23:31.558 "method": "bdev_nvme_attach_controller" 00:23:31.558 }' 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:31.558 16:09:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:31.558 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:31.558 ... 
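Both fio invocations in this log go through the same fio_plugin wrapper, and the empty asan_lib= assignments above are its sanitizer check coming back negative. Reduced to its essentials (paths from this run; the real helper lives in autotest_common.sh), the preload logic it traces is roughly:
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # Resolve the sanitizer runtime the plugin is linked against, if any.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# ASan (when present) must be preloaded ahead of the fio plugin itself.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"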
00:23:31.558 fio-3.35 00:23:31.558 Starting 3 threads 00:23:43.750 00:23:43.750 filename0: (groupid=0, jobs=1): err= 0: pid=99105: Mon Jul 15 16:09:35 2024 00:23:43.750 read: IOPS=172, BW=21.5MiB/s (22.6MB/s)(216MiB/10006msec) 00:23:43.750 slat (usec): min=9, max=170, avg=14.54, stdev= 5.16 00:23:43.750 clat (usec): min=9475, max=20127, avg=17382.40, stdev=1025.54 00:23:43.750 lat (usec): min=9489, max=20141, avg=17396.94, stdev=1025.46 00:23:43.750 clat percentiles (usec): 00:23:43.750 | 1.00th=[12125], 5.00th=[16188], 10.00th=[16450], 20.00th=[16712], 00:23:43.750 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:23:43.750 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18482], 95.00th=[18744], 00:23:43.750 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:23:43.750 | 99.99th=[20055] 00:23:43.750 bw ( KiB/s): min=21504, max=23040, per=27.44%, avg=22069.89, stdev=367.81, samples=19 00:23:43.750 iops : min= 168, max= 180, avg=172.42, stdev= 2.87, samples=19 00:23:43.750 lat (msec) : 10=0.06%, 20=99.83%, 50=0.12% 00:23:43.750 cpu : usr=92.30%, sys=5.99%, ctx=217, majf=0, minf=9 00:23:43.750 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.750 issued rwts: total=1725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.750 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:43.750 filename0: (groupid=0, jobs=1): err= 0: pid=99106: Mon Jul 15 16:09:35 2024 00:23:43.750 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(303MiB/10006msec) 00:23:43.750 slat (nsec): min=7162, max=41324, avg=13593.70, stdev=3529.29 00:23:43.750 clat (usec): min=8820, max=53647, avg=12351.74, stdev=2124.83 00:23:43.750 lat (usec): min=8832, max=53669, avg=12365.34, stdev=2124.91 00:23:43.750 clat percentiles (usec): 00:23:43.750 | 1.00th=[10552], 5.00th=[11076], 10.00th=[11338], 20.00th=[11731], 00:23:43.750 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:23:43.750 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[13304], 00:23:43.750 | 99.00th=[13829], 99.50th=[14222], 99.90th=[53216], 99.95th=[53216], 00:23:43.750 | 99.99th=[53740] 00:23:43.750 bw ( KiB/s): min=28672, max=32000, per=38.60%, avg=31043.37, stdev=831.03, samples=19 00:23:43.750 iops : min= 224, max= 250, avg=242.53, stdev= 6.49, samples=19 00:23:43.750 lat (msec) : 10=0.04%, 20=99.71%, 100=0.25% 00:23:43.750 cpu : usr=92.41%, sys=6.07%, ctx=11, majf=0, minf=0 00:23:43.750 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.750 issued rwts: total=2427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.750 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:43.750 filename0: (groupid=0, jobs=1): err= 0: pid=99107: Mon Jul 15 16:09:35 2024 00:23:43.750 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(270MiB/10046msec) 00:23:43.750 slat (nsec): min=7162, max=56584, avg=12342.98, stdev=4331.38 00:23:43.750 clat (usec): min=7116, max=46756, avg=13892.62, stdev=1268.49 00:23:43.750 lat (usec): min=7129, max=46768, avg=13904.96, stdev=1268.14 00:23:43.750 clat percentiles (usec): 00:23:43.750 | 1.00th=[ 9110], 5.00th=[12387], 10.00th=[12780], 20.00th=[13304], 00:23:43.750 | 
30.00th=[13566], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:23:43.750 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401], 00:23:43.750 | 99.00th=[16188], 99.50th=[16450], 99.90th=[16909], 99.95th=[17171], 00:23:43.750 | 99.99th=[46924] 00:23:43.750 bw ( KiB/s): min=26368, max=29184, per=34.36%, avg=27635.20, stdev=661.73, samples=20 00:23:43.750 iops : min= 206, max= 228, avg=215.90, stdev= 5.17, samples=20 00:23:43.750 lat (msec) : 10=1.11%, 20=98.84%, 50=0.05% 00:23:43.750 cpu : usr=92.76%, sys=5.83%, ctx=759, majf=0, minf=9 00:23:43.750 IO depths : 1=7.2%, 2=92.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.750 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.750 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:43.750 00:23:43.750 Run status group 0 (all jobs): 00:23:43.750 READ: bw=78.5MiB/s (82.4MB/s), 21.5MiB/s-30.3MiB/s (22.6MB/s-31.8MB/s), io=789MiB (827MB), run=10006-10046msec 00:23:43.750 16:09:35 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.751 00:23:43.751 real 0m11.066s 00:23:43.751 user 0m28.521s 00:23:43.751 sys 0m2.043s 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.751 ************************************ 00:23:43.751 16:09:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:43.751 END TEST fio_dif_digest 00:23:43.751 ************************************ 00:23:43.751 16:09:35 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:43.751 16:09:35 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:43.751 16:09:35 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:43.751 16:09:35 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:43.751 16:09:35 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:23:43.751 16:09:35 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.751 16:09:35 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:23:43.751 16:09:35 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.751 16:09:35 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.751 rmmod nvme_tcp 00:23:43.751 rmmod nvme_fabrics 00:23:43.751 
rmmod nvme_keyring 00:23:43.751 16:09:35 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.751 16:09:35 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:23:43.751 16:09:35 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:23:43.751 16:09:35 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 98347 ']' 00:23:43.751 16:09:35 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 98347 00:23:43.751 16:09:35 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 98347 ']' 00:23:43.751 16:09:35 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 98347 00:23:43.751 16:09:35 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:23:43.751 16:09:35 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:43.751 16:09:35 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98347 00:23:43.751 16:09:35 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:43.751 16:09:35 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:43.751 killing process with pid 98347 00:23:43.751 16:09:35 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98347' 00:23:43.751 16:09:35 nvmf_dif -- common/autotest_common.sh@967 -- # kill 98347 00:23:43.751 16:09:35 nvmf_dif -- common/autotest_common.sh@972 -- # wait 98347 00:23:43.751 16:09:36 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:43.751 16:09:36 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:43.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:43.751 Waiting for block devices as requested 00:23:43.751 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:43.751 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:43.751 16:09:36 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:43.751 16:09:36 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:43.751 16:09:36 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.751 16:09:36 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.751 16:09:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.751 16:09:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:43.751 16:09:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.751 16:09:36 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:43.751 ************************************ 00:23:43.751 00:23:43.751 real 1m0.080s 00:23:43.751 user 3m51.664s 00:23:43.751 sys 0m15.386s 00:23:43.751 16:09:36 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.751 16:09:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:43.751 END TEST nvmf_dif 00:23:43.751 ************************************ 00:23:43.751 16:09:36 -- common/autotest_common.sh@1142 -- # return 0 00:23:43.751 16:09:36 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:43.751 16:09:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:43.751 16:09:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.751 16:09:36 -- common/autotest_common.sh@10 -- # set +x 00:23:43.751 ************************************ 00:23:43.751 START TEST nvmf_abort_qd_sizes 00:23:43.751 ************************************ 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- 
# /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:43.751 * Looking for test storage... 00:23:43.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:43.751 16:09:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:43.751 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:43.752 Cannot find device "nvmf_tgt_br" 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:43.752 Cannot find device "nvmf_tgt_br2" 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:43.752 Cannot find device "nvmf_tgt_br" 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:43.752 Cannot find device "nvmf_tgt_br2" 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:43.752 16:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:43.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:43.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:43.752 16:09:37 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:43.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:23:43.752 00:23:43.752 --- 10.0.0.2 ping statistics --- 00:23:43.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.752 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:43.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:43.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:23:43.752 00:23:43.752 --- 10.0.0.3 ping statistics --- 00:23:43.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.752 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:43.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:43.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:23:43.752 00:23:43.752 --- 10.0.0.1 ping statistics --- 00:23:43.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.752 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:43.752 16:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:44.316 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:44.316 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:44.316 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:44.316 16:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.316 16:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:44.316 16:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:44.316 16:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.316 16:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:44.316 16:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99692 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99692 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 99692 ']' 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.574 16:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:44.574 [2024-07-15 16:09:38.118816] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
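The nvmf_veth_init sequence traced above is what gives the target its 10.0.0.2 address inside the nvmf_tgt_ns_spdk namespace before the nvmf_tgt app is started. Condensed to the commands that matter for this run (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is set up the same way and omitted here), it is approximately:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator-side check that the target address answers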
00:23:44.574 [2024-07-15 16:09:38.118918] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.574 [2024-07-15 16:09:38.261769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.832 [2024-07-15 16:09:38.388554] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.832 [2024-07-15 16:09:38.388619] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.832 [2024-07-15 16:09:38.388634] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.832 [2024-07-15 16:09:38.388645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.832 [2024-07-15 16:09:38.388654] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.832 [2024-07-15 16:09:38.388807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.832 [2024-07-15 16:09:38.389465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.832 [2024-07-15 16:09:38.389530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.832 [2024-07-15 16:09:38.389534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:45.767 16:09:39 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:45.767 16:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:45.767 ************************************ 00:23:45.767 START TEST spdk_target_abort 00:23:45.767 ************************************ 00:23:45.767 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:23:45.767 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:45.768 spdk_targetn1 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:45.768 [2024-07-15 16:09:39.287385] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:45.768 [2024-07-15 16:09:39.315548] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.768 16:09:39 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:45.768 16:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:49.083 Initializing NVMe Controllers 00:23:49.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:49.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:49.083 Initialization complete. Launching workers. 
00:23:49.083 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11330, failed: 0 00:23:49.083 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1098, failed to submit 10232 00:23:49.083 success 741, unsuccess 357, failed 0 00:23:49.083 16:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:49.083 16:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:52.361 Initializing NVMe Controllers 00:23:52.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:52.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:52.361 Initialization complete. Launching workers. 00:23:52.361 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5967, failed: 0 00:23:52.361 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1271, failed to submit 4696 00:23:52.361 success 237, unsuccess 1034, failed 0 00:23:52.361 16:09:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:52.361 16:09:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:55.671 Initializing NVMe Controllers 00:23:55.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:55.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:55.671 Initialization complete. Launching workers. 
00:23:55.671 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30583, failed: 0 00:23:55.671 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2628, failed to submit 27955 00:23:55.671 success 467, unsuccess 2161, failed 0 00:23:55.671 16:09:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:55.671 16:09:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.671 16:09:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:55.671 16:09:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.671 16:09:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:55.671 16:09:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.671 16:09:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99692 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 99692 ']' 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 99692 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99692 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:56.604 killing process with pid 99692 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99692' 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 99692 00:23:56.604 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 99692 00:23:56.862 00:23:56.862 real 0m11.213s 00:23:56.862 user 0m45.667s 00:23:56.862 sys 0m1.784s 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:56.862 ************************************ 00:23:56.862 END TEST spdk_target_abort 00:23:56.862 ************************************ 00:23:56.862 16:09:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:56.862 16:09:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:56.862 16:09:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:56.862 16:09:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.862 16:09:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:56.862 
************************************ 00:23:56.862 START TEST kernel_target_abort 00:23:56.862 ************************************ 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:56.862 16:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:57.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:57.427 Waiting for block devices as requested 00:23:57.427 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:57.427 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:57.427 No valid GPT data, bailing 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:57.427 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:57.685 No valid GPT data, bailing 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:57.685 No valid GPT data, bailing 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:57.685 No valid GPT data, bailing 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:57.685 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d --hostid=a185c444-aaeb-4d13-aa60-df1b0266600d -a 10.0.0.1 -t tcp -s 4420 00:23:57.943 00:23:57.943 Discovery Log Number of Records 2, Generation counter 2 00:23:57.943 =====Discovery Log Entry 0====== 00:23:57.943 trtype: tcp 00:23:57.943 adrfam: ipv4 00:23:57.943 subtype: current discovery subsystem 00:23:57.943 treq: not specified, sq flow control disable supported 00:23:57.943 portid: 1 00:23:57.943 trsvcid: 4420 00:23:57.943 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:57.943 traddr: 10.0.0.1 00:23:57.943 eflags: none 00:23:57.943 sectype: none 00:23:57.943 =====Discovery Log Entry 1====== 00:23:57.943 trtype: tcp 00:23:57.943 adrfam: ipv4 00:23:57.943 subtype: nvme subsystem 00:23:57.943 treq: not specified, sq flow control disable supported 00:23:57.943 portid: 1 00:23:57.943 trsvcid: 4420 00:23:57.943 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:57.943 traddr: 10.0.0.1 00:23:57.943 eflags: none 00:23:57.943 sectype: none 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:57.943 16:09:51 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:57.943 16:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:01.223 Initializing NVMe Controllers 00:24:01.223 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:01.223 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:01.223 Initialization complete. Launching workers. 00:24:01.223 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31933, failed: 0 00:24:01.223 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31933, failed to submit 0 00:24:01.223 success 0, unsuccess 31933, failed 0 00:24:01.223 16:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:01.223 16:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:04.500 Initializing NVMe Controllers 00:24:04.500 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:04.500 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:04.500 Initialization complete. Launching workers. 
00:24:04.500 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64139, failed: 0 00:24:04.500 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27529, failed to submit 36610 00:24:04.500 success 0, unsuccess 27529, failed 0 00:24:04.500 16:09:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:04.500 16:09:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:07.778 Initializing NVMe Controllers 00:24:07.778 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:07.778 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:07.778 Initialization complete. Launching workers. 00:24:07.778 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74203, failed: 0 00:24:07.778 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18578, failed to submit 55625 00:24:07.778 success 0, unsuccess 18578, failed 0 00:24:07.778 16:10:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:07.778 16:10:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:07.778 16:10:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:07.778 16:10:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:07.778 16:10:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:07.778 16:10:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:07.778 16:10:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:07.778 16:10:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:07.778 16:10:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:07.778 16:10:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:08.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:09.455 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:09.713 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:09.713 00:24:09.713 real 0m12.815s 00:24:09.713 user 0m6.062s 00:24:09.713 sys 0m4.061s 00:24:09.713 16:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:09.713 16:10:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:09.713 ************************************ 00:24:09.713 END TEST kernel_target_abort 00:24:09.713 ************************************ 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:09.713 
16:10:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:09.713 rmmod nvme_tcp 00:24:09.713 rmmod nvme_fabrics 00:24:09.713 rmmod nvme_keyring 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99692 ']' 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99692 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 99692 ']' 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 99692 00:24:09.713 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (99692) - No such process 00:24:09.713 Process with pid 99692 is not found 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 99692 is not found' 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:09.713 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:10.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:10.279 Waiting for block devices as requested 00:24:10.279 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:10.279 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:10.279 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:10.279 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:10.279 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.279 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.279 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.279 16:10:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:10.279 16:10:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.279 16:10:03 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:10.280 00:24:10.280 real 0m27.224s 00:24:10.280 user 0m52.887s 00:24:10.280 sys 0m7.199s 00:24:10.280 16:10:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:10.280 16:10:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:10.280 ************************************ 00:24:10.280 END TEST nvmf_abort_qd_sizes 00:24:10.280 ************************************ 00:24:10.539 16:10:04 -- common/autotest_common.sh@1142 -- # return 0 00:24:10.539 16:10:04 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:10.539 16:10:04 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:24:10.539 16:10:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.539 16:10:04 -- common/autotest_common.sh@10 -- # set +x 00:24:10.539 ************************************ 00:24:10.539 START TEST keyring_file 00:24:10.539 ************************************ 00:24:10.539 16:10:04 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:10.539 * Looking for test storage... 00:24:10.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:10.539 16:10:04 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:10.539 16:10:04 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:10.539 16:10:04 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.539 16:10:04 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.539 16:10:04 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.539 16:10:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.539 16:10:04 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.539 16:10:04 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.539 16:10:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:10.539 16:10:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.539 16:10:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:10.539 16:10:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:10.539 16:10:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:10.539 16:10:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:10.539 16:10:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:10.539 16:10:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:10.539 16:10:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:10.539 16:10:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:10.539 16:10:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:10.539 16:10:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:10.539 16:10:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:10.539 16:10:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:10.539 16:10:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Gbnccf68XP 00:24:10.539 16:10:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:10.539 16:10:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:10.539 16:10:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Gbnccf68XP 00:24:10.539 16:10:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Gbnccf68XP 00:24:10.539 16:10:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Gbnccf68XP 00:24:10.540 16:10:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:10.540 16:10:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:10.540 16:10:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:10.540 16:10:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:10.540 16:10:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:10.540 16:10:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:10.540 16:10:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0pLaxKEfYB 00:24:10.540 16:10:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:10.540 16:10:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:10.540 16:10:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:10.540 16:10:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:10.540 16:10:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:10.540 16:10:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:10.540 16:10:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:10.540 16:10:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0pLaxKEfYB 00:24:10.812 16:10:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0pLaxKEfYB 00:24:10.812 16:10:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0pLaxKEfYB 00:24:10.812 16:10:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=100578 00:24:10.812 16:10:04 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:10.812 16:10:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100578 00:24:10.812 16:10:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100578 ']' 00:24:10.812 16:10:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.812 16:10:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.812 16:10:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.812 16:10:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.812 16:10:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:10.812 [2024-07-15 16:10:04.336828] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
00:24:10.812 [2024-07-15 16:10:04.336942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100578 ] 00:24:10.812 [2024-07-15 16:10:04.473524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.078 [2024-07-15 16:10:04.594953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.643 16:10:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:11.643 16:10:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:11.643 16:10:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:11.643 16:10:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.643 16:10:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:11.643 [2024-07-15 16:10:05.350117] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.643 null0 00:24:11.901 [2024-07-15 16:10:05.382081] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.901 [2024-07-15 16:10:05.382298] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:11.901 [2024-07-15 16:10:05.390094] tcp.c:3710:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.901 16:10:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:11.901 [2024-07-15 16:10:05.402125] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:11.901 2024/07/15 16:10:05 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:24:11.901 request: 00:24:11.901 { 00:24:11.901 "method": "nvmf_subsystem_add_listener", 00:24:11.901 "params": { 00:24:11.901 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:11.901 "secure_channel": false, 00:24:11.901 "listen_address": { 00:24:11.901 "trtype": "tcp", 00:24:11.901 "traddr": "127.0.0.1", 00:24:11.901 "trsvcid": "4420" 00:24:11.901 } 00:24:11.901 } 00:24:11.901 } 00:24:11.901 Got JSON-RPC error 
response 00:24:11.901 GoRPCClient: error on JSON-RPC call 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:11.901 16:10:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=100613 00:24:11.901 16:10:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 100613 /var/tmp/bperf.sock 00:24:11.901 16:10:05 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100613 ']' 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:11.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.901 16:10:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:11.901 [2024-07-15 16:10:05.467864] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:24:11.901 [2024-07-15 16:10:05.467974] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100613 ] 00:24:11.901 [2024-07-15 16:10:05.611494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.158 [2024-07-15 16:10:05.742090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.782 16:10:06 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.782 16:10:06 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:12.782 16:10:06 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gbnccf68XP 00:24:12.782 16:10:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Gbnccf68XP 00:24:13.040 16:10:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0pLaxKEfYB 00:24:13.040 16:10:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0pLaxKEfYB 00:24:13.298 16:10:06 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:13.298 16:10:06 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:13.298 16:10:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:13.298 16:10:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:13.298 16:10:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:13.555 16:10:07 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Gbnccf68XP == 
\/\t\m\p\/\t\m\p\.\G\b\n\c\c\f\6\8\X\P ]] 00:24:13.555 16:10:07 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:13.555 16:10:07 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:24:13.555 16:10:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:13.555 16:10:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:13.555 16:10:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:13.812 16:10:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0pLaxKEfYB == \/\t\m\p\/\t\m\p\.\0\p\L\a\x\K\E\f\Y\B ]] 00:24:13.812 16:10:07 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:24:13.812 16:10:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:13.812 16:10:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:13.812 16:10:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:13.812 16:10:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:13.812 16:10:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:14.069 16:10:07 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:14.069 16:10:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:24:14.069 16:10:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:14.069 16:10:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:14.069 16:10:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:14.069 16:10:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:14.069 16:10:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:14.326 16:10:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:14.326 16:10:07 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:14.326 16:10:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:14.582 [2024-07-15 16:10:08.140885] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.582 nvme0n1 00:24:14.582 16:10:08 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:24:14.582 16:10:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:14.582 16:10:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:14.582 16:10:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:14.582 16:10:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:14.582 16:10:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:14.837 16:10:08 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:14.837 16:10:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:24:14.837 16:10:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:14.837 16:10:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:14.837 16:10:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:24:14.837 16:10:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:14.837 16:10:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.403 16:10:08 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:15.403 16:10:08 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:15.403 Running I/O for 1 seconds... 00:24:16.358 00:24:16.358 Latency(us) 00:24:16.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.358 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:16.358 nvme0n1 : 1.01 11738.16 45.85 0.00 0.00 10865.88 5093.93 17754.30 00:24:16.358 =================================================================================================================== 00:24:16.358 Total : 11738.16 45.85 0.00 0.00 10865.88 5093.93 17754.30 00:24:16.358 0 00:24:16.358 16:10:09 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:16.358 16:10:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:16.616 16:10:10 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:24:16.616 16:10:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:16.616 16:10:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:16.616 16:10:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:16.616 16:10:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:16.616 16:10:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:16.873 16:10:10 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:24:16.873 16:10:10 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:24:16.873 16:10:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:16.873 16:10:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:16.873 16:10:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:16.873 16:10:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:16.873 16:10:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.132 16:10:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:17.132 16:10:10 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:17.132 16:10:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:17.132 16:10:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:17.132 16:10:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:17.132 16:10:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:17.132 16:10:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:17.132 16:10:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:24:17.132 16:10:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:17.132 16:10:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:17.391 [2024-07-15 16:10:11.067027] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:17.391 [2024-07-15 16:10:11.067214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1445f30 (107): Transport endpoint is not connected 00:24:17.391 [2024-07-15 16:10:11.068204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1445f30 (9): Bad file descriptor 00:24:17.391 [2024-07-15 16:10:11.069200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:17.391 [2024-07-15 16:10:11.069223] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:17.391 [2024-07-15 16:10:11.069234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:17.391 2024/07/15 16:10:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:17.391 request: 00:24:17.391 { 00:24:17.391 "method": "bdev_nvme_attach_controller", 00:24:17.391 "params": { 00:24:17.391 "name": "nvme0", 00:24:17.391 "trtype": "tcp", 00:24:17.391 "traddr": "127.0.0.1", 00:24:17.391 "adrfam": "ipv4", 00:24:17.391 "trsvcid": "4420", 00:24:17.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:17.391 "prchk_reftag": false, 00:24:17.391 "prchk_guard": false, 00:24:17.391 "hdgst": false, 00:24:17.391 "ddgst": false, 00:24:17.391 "psk": "key1" 00:24:17.391 } 00:24:17.391 } 00:24:17.391 Got JSON-RPC error response 00:24:17.391 GoRPCClient: error on JSON-RPC call 00:24:17.391 16:10:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:17.391 16:10:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:17.391 16:10:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:17.391 16:10:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:17.391 16:10:11 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:24:17.391 16:10:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:17.391 16:10:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:17.391 16:10:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:17.391 16:10:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:17.391 16:10:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.648 16:10:11 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:24:17.648 
16:10:11 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:24:17.648 16:10:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:17.648 16:10:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:17.648 16:10:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:17.648 16:10:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:17.648 16:10:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.906 16:10:11 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:17.906 16:10:11 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:24:17.906 16:10:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:18.164 16:10:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:24:18.164 16:10:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:18.421 16:10:12 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:24:18.421 16:10:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:18.421 16:10:12 keyring_file -- keyring/file.sh@77 -- # jq length 00:24:18.679 16:10:12 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:18.679 16:10:12 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Gbnccf68XP 00:24:18.679 16:10:12 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gbnccf68XP 00:24:18.679 16:10:12 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:18.679 16:10:12 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gbnccf68XP 00:24:18.679 16:10:12 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:18.679 16:10:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:18.679 16:10:12 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:18.679 16:10:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:18.679 16:10:12 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gbnccf68XP 00:24:18.679 16:10:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Gbnccf68XP 00:24:18.938 [2024-07-15 16:10:12.602676] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Gbnccf68XP': 0100660 00:24:18.938 [2024-07-15 16:10:12.602724] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:18.938 2024/07/15 16:10:12 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.Gbnccf68XP], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:24:18.938 request: 00:24:18.938 { 00:24:18.938 "method": "keyring_file_add_key", 00:24:18.938 "params": { 00:24:18.938 "name": "key0", 00:24:18.938 "path": "/tmp/tmp.Gbnccf68XP" 00:24:18.938 } 00:24:18.938 } 00:24:18.938 Got JSON-RPC error response 00:24:18.938 GoRPCClient: error on JSON-RPC call 00:24:18.938 16:10:12 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:24:18.938 16:10:12 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:18.938 16:10:12 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:18.938 16:10:12 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:18.938 16:10:12 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Gbnccf68XP 00:24:18.938 16:10:12 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gbnccf68XP 00:24:18.938 16:10:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Gbnccf68XP 00:24:19.505 16:10:12 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Gbnccf68XP 00:24:19.505 16:10:12 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:24:19.505 16:10:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:19.505 16:10:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:19.505 16:10:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:19.505 16:10:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:19.505 16:10:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:19.762 16:10:13 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:19.762 16:10:13 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:19.762 16:10:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:19.762 16:10:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:19.762 16:10:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:19.762 16:10:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:19.762 16:10:13 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:19.762 16:10:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:19.762 16:10:13 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:19.763 16:10:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:20.020 [2024-07-15 16:10:13.519047] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Gbnccf68XP': No such file or directory 00:24:20.020 [2024-07-15 16:10:13.519099] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:20.020 [2024-07-15 16:10:13.519133] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:20.020 [2024-07-15 16:10:13.519153] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:20.020 [2024-07-15 16:10:13.519166] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:20.020 2024/07/15 
16:10:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:24:20.020 request: 00:24:20.020 { 00:24:20.020 "method": "bdev_nvme_attach_controller", 00:24:20.020 "params": { 00:24:20.020 "name": "nvme0", 00:24:20.020 "trtype": "tcp", 00:24:20.020 "traddr": "127.0.0.1", 00:24:20.020 "adrfam": "ipv4", 00:24:20.020 "trsvcid": "4420", 00:24:20.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:20.020 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:20.020 "prchk_reftag": false, 00:24:20.020 "prchk_guard": false, 00:24:20.020 "hdgst": false, 00:24:20.020 "ddgst": false, 00:24:20.020 "psk": "key0" 00:24:20.020 } 00:24:20.020 } 00:24:20.020 Got JSON-RPC error response 00:24:20.020 GoRPCClient: error on JSON-RPC call 00:24:20.020 16:10:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:20.020 16:10:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:20.020 16:10:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:20.020 16:10:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:20.020 16:10:13 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:24:20.020 16:10:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:20.277 16:10:13 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:20.277 16:10:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:20.277 16:10:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:20.277 16:10:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:20.277 16:10:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:20.278 16:10:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:20.278 16:10:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YFmzflWp7v 00:24:20.278 16:10:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:20.278 16:10:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:20.278 16:10:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:20.278 16:10:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:20.278 16:10:13 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:20.278 16:10:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:20.278 16:10:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:20.278 16:10:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YFmzflWp7v 00:24:20.278 16:10:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YFmzflWp7v 00:24:20.278 16:10:13 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.YFmzflWp7v 00:24:20.278 16:10:13 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YFmzflWp7v 00:24:20.278 16:10:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YFmzflWp7v 00:24:20.535 16:10:14 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:20.535 16:10:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:20.793 nvme0n1 00:24:20.793 16:10:14 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:24:20.793 16:10:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:20.793 16:10:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:20.793 16:10:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:20.793 16:10:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:20.793 16:10:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:21.051 16:10:14 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:21.051 16:10:14 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:21.051 16:10:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:21.308 16:10:15 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:24:21.308 16:10:15 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:24:21.308 16:10:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:21.308 16:10:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:21.308 16:10:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:21.566 16:10:15 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:21.566 16:10:15 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:24:21.566 16:10:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:21.566 16:10:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:21.566 16:10:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:21.566 16:10:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:21.566 16:10:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:21.823 16:10:15 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:21.823 16:10:15 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:21.823 16:10:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:22.081 16:10:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:22.081 16:10:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:22.081 16:10:15 keyring_file -- keyring/file.sh@104 -- # jq length 00:24:22.338 16:10:16 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:22.338 16:10:16 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YFmzflWp7v 00:24:22.338 16:10:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YFmzflWp7v 
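The keyring/file.sh@99 through @107 steps above pin down the removal semantics: removing a key file that a live controller still references marks it removed and drops the keyring's own reference (refcnt 2 -> 1), but the key only disappears from keyring_get_keys once the controller detaches. A minimal recap of that check, assuming the same bperf socket is still up:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock keyring_file_remove_key key0
    # while nvme0 still holds the key: removed is true, refcnt has dropped to 1
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | {removed, refcnt}'
    $rpc -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
    # with the last reference gone, the keyring is empty again
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq length

The keys are then re-registered (file.sh@107 onward) so that save_config can capture a configuration containing both keyring_file_add_key entries.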
00:24:22.597 16:10:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0pLaxKEfYB 00:24:22.597 16:10:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0pLaxKEfYB 00:24:22.855 16:10:16 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:22.855 16:10:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:23.420 nvme0n1 00:24:23.420 16:10:16 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:23.420 16:10:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:23.678 16:10:17 keyring_file -- keyring/file.sh@112 -- # config='{ 00:24:23.678 "subsystems": [ 00:24:23.678 { 00:24:23.678 "subsystem": "keyring", 00:24:23.678 "config": [ 00:24:23.678 { 00:24:23.678 "method": "keyring_file_add_key", 00:24:23.678 "params": { 00:24:23.678 "name": "key0", 00:24:23.678 "path": "/tmp/tmp.YFmzflWp7v" 00:24:23.678 } 00:24:23.678 }, 00:24:23.678 { 00:24:23.678 "method": "keyring_file_add_key", 00:24:23.678 "params": { 00:24:23.678 "name": "key1", 00:24:23.678 "path": "/tmp/tmp.0pLaxKEfYB" 00:24:23.678 } 00:24:23.678 } 00:24:23.678 ] 00:24:23.678 }, 00:24:23.678 { 00:24:23.678 "subsystem": "iobuf", 00:24:23.678 "config": [ 00:24:23.678 { 00:24:23.678 "method": "iobuf_set_options", 00:24:23.678 "params": { 00:24:23.678 "large_bufsize": 135168, 00:24:23.678 "large_pool_count": 1024, 00:24:23.678 "small_bufsize": 8192, 00:24:23.678 "small_pool_count": 8192 00:24:23.678 } 00:24:23.678 } 00:24:23.678 ] 00:24:23.678 }, 00:24:23.678 { 00:24:23.678 "subsystem": "sock", 00:24:23.678 "config": [ 00:24:23.678 { 00:24:23.678 "method": "sock_set_default_impl", 00:24:23.678 "params": { 00:24:23.678 "impl_name": "posix" 00:24:23.678 } 00:24:23.678 }, 00:24:23.678 { 00:24:23.678 "method": "sock_impl_set_options", 00:24:23.678 "params": { 00:24:23.678 "enable_ktls": false, 00:24:23.678 "enable_placement_id": 0, 00:24:23.678 "enable_quickack": false, 00:24:23.678 "enable_recv_pipe": true, 00:24:23.678 "enable_zerocopy_send_client": false, 00:24:23.678 "enable_zerocopy_send_server": true, 00:24:23.678 "impl_name": "ssl", 00:24:23.678 "recv_buf_size": 4096, 00:24:23.678 "send_buf_size": 4096, 00:24:23.678 "tls_version": 0, 00:24:23.678 "zerocopy_threshold": 0 00:24:23.678 } 00:24:23.678 }, 00:24:23.678 { 00:24:23.678 "method": "sock_impl_set_options", 00:24:23.678 "params": { 00:24:23.678 "enable_ktls": false, 00:24:23.679 "enable_placement_id": 0, 00:24:23.679 "enable_quickack": false, 00:24:23.679 "enable_recv_pipe": true, 00:24:23.679 "enable_zerocopy_send_client": false, 00:24:23.679 "enable_zerocopy_send_server": true, 00:24:23.679 "impl_name": "posix", 00:24:23.679 "recv_buf_size": 2097152, 00:24:23.679 "send_buf_size": 2097152, 00:24:23.679 "tls_version": 0, 00:24:23.679 "zerocopy_threshold": 0 00:24:23.679 } 00:24:23.679 } 00:24:23.679 ] 00:24:23.679 }, 00:24:23.679 { 00:24:23.679 "subsystem": "vmd", 00:24:23.679 "config": [] 00:24:23.679 }, 00:24:23.679 { 00:24:23.679 "subsystem": "accel", 00:24:23.679 "config": [ 00:24:23.679 { 00:24:23.679 "method": 
"accel_set_options", 00:24:23.679 "params": { 00:24:23.679 "buf_count": 2048, 00:24:23.679 "large_cache_size": 16, 00:24:23.679 "sequence_count": 2048, 00:24:23.679 "small_cache_size": 128, 00:24:23.679 "task_count": 2048 00:24:23.679 } 00:24:23.679 } 00:24:23.679 ] 00:24:23.679 }, 00:24:23.679 { 00:24:23.679 "subsystem": "bdev", 00:24:23.679 "config": [ 00:24:23.679 { 00:24:23.679 "method": "bdev_set_options", 00:24:23.679 "params": { 00:24:23.679 "bdev_auto_examine": true, 00:24:23.679 "bdev_io_cache_size": 256, 00:24:23.679 "bdev_io_pool_size": 65535, 00:24:23.679 "iobuf_large_cache_size": 16, 00:24:23.679 "iobuf_small_cache_size": 128 00:24:23.679 } 00:24:23.679 }, 00:24:23.679 { 00:24:23.679 "method": "bdev_raid_set_options", 00:24:23.679 "params": { 00:24:23.679 "process_window_size_kb": 1024 00:24:23.679 } 00:24:23.679 }, 00:24:23.679 { 00:24:23.679 "method": "bdev_iscsi_set_options", 00:24:23.679 "params": { 00:24:23.679 "timeout_sec": 30 00:24:23.679 } 00:24:23.679 }, 00:24:23.679 { 00:24:23.679 "method": "bdev_nvme_set_options", 00:24:23.679 "params": { 00:24:23.679 "action_on_timeout": "none", 00:24:23.679 "allow_accel_sequence": false, 00:24:23.679 "arbitration_burst": 0, 00:24:23.679 "bdev_retry_count": 3, 00:24:23.679 "ctrlr_loss_timeout_sec": 0, 00:24:23.679 "delay_cmd_submit": true, 00:24:23.679 "dhchap_dhgroups": [ 00:24:23.679 "null", 00:24:23.679 "ffdhe2048", 00:24:23.679 "ffdhe3072", 00:24:23.679 "ffdhe4096", 00:24:23.679 "ffdhe6144", 00:24:23.679 "ffdhe8192" 00:24:23.679 ], 00:24:23.679 "dhchap_digests": [ 00:24:23.679 "sha256", 00:24:23.679 "sha384", 00:24:23.679 "sha512" 00:24:23.679 ], 00:24:23.679 "disable_auto_failback": false, 00:24:23.679 "fast_io_fail_timeout_sec": 0, 00:24:23.679 "generate_uuids": false, 00:24:23.679 "high_priority_weight": 0, 00:24:23.679 "io_path_stat": false, 00:24:23.679 "io_queue_requests": 512, 00:24:23.679 "keep_alive_timeout_ms": 10000, 00:24:23.679 "low_priority_weight": 0, 00:24:23.679 "medium_priority_weight": 0, 00:24:23.679 "nvme_adminq_poll_period_us": 10000, 00:24:23.679 "nvme_error_stat": false, 00:24:23.679 "nvme_ioq_poll_period_us": 0, 00:24:23.679 "rdma_cm_event_timeout_ms": 0, 00:24:23.679 "rdma_max_cq_size": 0, 00:24:23.679 "rdma_srq_size": 0, 00:24:23.679 "reconnect_delay_sec": 0, 00:24:23.679 "timeout_admin_us": 0, 00:24:23.679 "timeout_us": 0, 00:24:23.679 "transport_ack_timeout": 0, 00:24:23.679 "transport_retry_count": 4, 00:24:23.679 "transport_tos": 0 00:24:23.679 } 00:24:23.679 }, 00:24:23.679 { 00:24:23.679 "method": "bdev_nvme_attach_controller", 00:24:23.679 "params": { 00:24:23.679 "adrfam": "IPv4", 00:24:23.679 "ctrlr_loss_timeout_sec": 0, 00:24:23.679 "ddgst": false, 00:24:23.679 "fast_io_fail_timeout_sec": 0, 00:24:23.679 "hdgst": false, 00:24:23.679 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:23.679 "name": "nvme0", 00:24:23.679 "prchk_guard": false, 00:24:23.679 "prchk_reftag": false, 00:24:23.679 "psk": "key0", 00:24:23.679 "reconnect_delay_sec": 0, 00:24:23.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:23.679 "traddr": "127.0.0.1", 00:24:23.679 "trsvcid": "4420", 00:24:23.679 "trtype": "TCP" 00:24:23.679 } 00:24:23.679 }, 00:24:23.679 { 00:24:23.679 "method": "bdev_nvme_set_hotplug", 00:24:23.679 "params": { 00:24:23.679 "enable": false, 00:24:23.679 "period_us": 100000 00:24:23.679 } 00:24:23.679 }, 00:24:23.679 { 00:24:23.679 "method": "bdev_wait_for_examine" 00:24:23.679 } 00:24:23.679 ] 00:24:23.679 }, 00:24:23.679 { 00:24:23.679 "subsystem": "nbd", 00:24:23.679 "config": [] 00:24:23.679 } 
00:24:23.679 ] 00:24:23.679 }' 00:24:23.679 16:10:17 keyring_file -- keyring/file.sh@114 -- # killprocess 100613 00:24:23.679 16:10:17 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100613 ']' 00:24:23.679 16:10:17 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100613 00:24:23.679 16:10:17 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:23.679 16:10:17 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:23.679 16:10:17 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100613 00:24:23.679 killing process with pid 100613 00:24:23.679 Received shutdown signal, test time was about 1.000000 seconds 00:24:23.679 00:24:23.679 Latency(us) 00:24:23.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.679 =================================================================================================================== 00:24:23.679 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:23.679 16:10:17 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:23.679 16:10:17 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:23.679 16:10:17 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100613' 00:24:23.679 16:10:17 keyring_file -- common/autotest_common.sh@967 -- # kill 100613 00:24:23.679 16:10:17 keyring_file -- common/autotest_common.sh@972 -- # wait 100613 00:24:23.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:23.938 16:10:17 keyring_file -- keyring/file.sh@117 -- # bperfpid=101089 00:24:23.938 16:10:17 keyring_file -- keyring/file.sh@119 -- # waitforlisten 101089 /var/tmp/bperf.sock 00:24:23.938 16:10:17 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:23.938 16:10:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 101089 ']' 00:24:23.938 16:10:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:23.938 16:10:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.938 16:10:17 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:24:23.938 "subsystems": [ 00:24:23.938 { 00:24:23.938 "subsystem": "keyring", 00:24:23.938 "config": [ 00:24:23.938 { 00:24:23.938 "method": "keyring_file_add_key", 00:24:23.938 "params": { 00:24:23.938 "name": "key0", 00:24:23.938 "path": "/tmp/tmp.YFmzflWp7v" 00:24:23.938 } 00:24:23.938 }, 00:24:23.938 { 00:24:23.938 "method": "keyring_file_add_key", 00:24:23.938 "params": { 00:24:23.938 "name": "key1", 00:24:23.938 "path": "/tmp/tmp.0pLaxKEfYB" 00:24:23.938 } 00:24:23.938 } 00:24:23.938 ] 00:24:23.938 }, 00:24:23.938 { 00:24:23.938 "subsystem": "iobuf", 00:24:23.938 "config": [ 00:24:23.938 { 00:24:23.938 "method": "iobuf_set_options", 00:24:23.938 "params": { 00:24:23.938 "large_bufsize": 135168, 00:24:23.938 "large_pool_count": 1024, 00:24:23.938 "small_bufsize": 8192, 00:24:23.938 "small_pool_count": 8192 00:24:23.938 } 00:24:23.938 } 00:24:23.938 ] 00:24:23.938 }, 00:24:23.938 { 00:24:23.938 "subsystem": "sock", 00:24:23.938 "config": [ 00:24:23.938 { 00:24:23.938 "method": "sock_set_default_impl", 00:24:23.938 "params": { 00:24:23.938 "impl_name": "posix" 00:24:23.938 } 00:24:23.938 }, 00:24:23.938 { 00:24:23.938 "method": "sock_impl_set_options", 00:24:23.938 "params": { 00:24:23.938 "enable_ktls": 
false, 00:24:23.938 "enable_placement_id": 0, 00:24:23.938 "enable_quickack": false, 00:24:23.938 "enable_recv_pipe": true, 00:24:23.938 "enable_zerocopy_send_client": false, 00:24:23.938 "enable_zerocopy_send_server": true, 00:24:23.938 "impl_name": "ssl", 00:24:23.938 "recv_buf_size": 4096, 00:24:23.938 "send_buf_size": 4096, 00:24:23.938 "tls_version": 0, 00:24:23.938 "zerocopy_threshold": 0 00:24:23.938 } 00:24:23.938 }, 00:24:23.938 { 00:24:23.938 "method": "sock_impl_set_options", 00:24:23.938 "params": { 00:24:23.938 "enable_ktls": false, 00:24:23.938 "enable_placement_id": 0, 00:24:23.938 "enable_quickack": false, 00:24:23.938 "enable_recv_pipe": true, 00:24:23.938 "enable_zerocopy_send_client": false, 00:24:23.938 "enable_zerocopy_send_server": true, 00:24:23.938 "impl_name": "posix", 00:24:23.938 "recv_buf_size": 2097152, 00:24:23.938 "send_buf_size": 2097152, 00:24:23.938 "tls_version": 0, 00:24:23.938 "zerocopy_threshold": 0 00:24:23.938 } 00:24:23.938 } 00:24:23.938 ] 00:24:23.938 }, 00:24:23.938 { 00:24:23.938 "subsystem": "vmd", 00:24:23.938 "config": [] 00:24:23.938 }, 00:24:23.938 { 00:24:23.938 "subsystem": "accel", 00:24:23.938 "config": [ 00:24:23.938 { 00:24:23.938 "method": "accel_set_options", 00:24:23.938 "params": { 00:24:23.938 "buf_count": 2048, 00:24:23.938 "large_cache_size": 16, 00:24:23.938 "sequence_count": 2048, 00:24:23.938 "small_cache_size": 128, 00:24:23.938 "task_count": 2048 00:24:23.938 } 00:24:23.938 } 00:24:23.938 ] 00:24:23.938 }, 00:24:23.938 { 00:24:23.938 "subsystem": "bdev", 00:24:23.938 "config": [ 00:24:23.938 { 00:24:23.938 "method": "bdev_set_options", 00:24:23.939 "params": { 00:24:23.939 "bdev_auto_examine": true, 00:24:23.939 "bdev_io_cache_size": 256, 00:24:23.939 "bdev_io_pool_size": 65535, 00:24:23.939 "iobuf_large_cache_size": 16, 00:24:23.939 "iobuf_small_cache_size": 128 00:24:23.939 } 00:24:23.939 }, 00:24:23.939 { 00:24:23.939 "method": "bdev_raid_set_options", 00:24:23.939 "params": { 00:24:23.939 "process_window_size_kb": 1024 00:24:23.939 } 00:24:23.939 }, 00:24:23.939 { 00:24:23.939 "method": "bdev_iscsi_set_options", 00:24:23.939 "params": { 00:24:23.939 "timeout_sec": 30 00:24:23.939 } 00:24:23.939 }, 00:24:23.939 { 00:24:23.939 "method": "bdev_nvme_set_options", 00:24:23.939 "params": { 00:24:23.939 "action_on_timeout": "none", 00:24:23.939 "allow_accel_sequence": false, 00:24:23.939 "arbitration_burst": 0, 00:24:23.939 "bdev_retry_count": 3, 00:24:23.939 "ctrlr_loss_timeout_sec": 0, 00:24:23.939 "delay_cmd_submit": true, 00:24:23.939 "dhchap_dhgroups": [ 00:24:23.939 "null", 00:24:23.939 "ffdhe2048", 00:24:23.939 "ffdhe3072", 00:24:23.939 "ffdhe4096", 00:24:23.939 "ffdhe6144", 00:24:23.939 "ffdhe8192" 00:24:23.939 ], 00:24:23.939 "dhchap_digests": [ 00:24:23.939 "sha256", 00:24:23.939 "sha384", 00:24:23.939 "sha512" 00:24:23.939 ], 00:24:23.939 "disable_auto_failback": false, 00:24:23.939 "fast_io_fail_timeout_sec": 0, 00:24:23.939 "generate_uuids": false, 00:24:23.939 "high_priority_weight": 0, 00:24:23.939 "io_path_stat": false, 00:24:23.939 "io_queue_requests": 512, 00:24:23.939 "keep_alive_timeout_ms": 10000, 00:24:23.939 "low_priority_weight": 0, 00:24:23.939 "medium_priority_weight": 0, 00:24:23.939 "nvme_adminq_poll_period_us": 10000, 00:24:23.939 "nvme_error_stat": false, 00:24:23.939 "nvme_ioq_poll_period_us": 0, 00:24:23.939 "rdma_cm_event_timeout_ms": 0, 00:24:23.939 "rdma_max_cq_size": 0, 00:24:23.939 "rdma_srq_size": 0, 00:24:23.939 "reconnect_delay_sec": 0, 00:24:23.939 "timeout_admin_us": 0, 00:24:23.939 
"timeout_us": 0, 00:24:23.939 "transport_ack_timeout": 0, 00:24:23.939 "transport_retry_count": 4, 00:24:23.939 "transport_tos": 0 00:24:23.939 } 00:24:23.939 }, 00:24:23.939 { 00:24:23.939 "method": "bdev_nvme_attach_controller", 00:24:23.939 "params": { 00:24:23.939 "adrfam": "IPv4", 00:24:23.939 "ctrlr_loss_timeout_sec": 0, 00:24:23.939 "ddgst": false, 00:24:23.939 "fast_io_fail_timeout_sec": 0, 00:24:23.939 "hdgst": false, 00:24:23.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:23.939 "name": "nvme0", 00:24:23.939 "prchk_guard": false, 00:24:23.939 "prchk_reftag": false, 00:24:23.939 "psk": "key0", 00:24:23.939 "reconnect_delay_sec": 0, 00:24:23.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:23.939 "traddr": "127.0.0.1", 00:24:23.939 "trsvcid": "4420", 00:24:23.939 "trtype": "TCP" 00:24:23.939 } 00:24:23.939 }, 00:24:23.939 { 00:24:23.939 "method": "bdev_nvme_set_hotplug", 00:24:23.939 "params": { 00:24:23.939 "enable": false, 00:24:23.939 "period_us": 100000 00:24:23.939 } 00:24:23.939 }, 00:24:23.939 { 00:24:23.939 "method": "bdev_wait_for_examine" 00:24:23.939 } 00:24:23.939 ] 00:24:23.939 }, 00:24:23.939 { 00:24:23.939 "subsystem": "nbd", 00:24:23.939 "config": [] 00:24:23.939 } 00:24:23.939 ] 00:24:23.939 }' 00:24:23.939 16:10:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:23.939 16:10:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.939 16:10:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:23.939 [2024-07-15 16:10:17.514932] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:24:23.939 [2024-07-15 16:10:17.516114] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101089 ] 00:24:23.939 [2024-07-15 16:10:17.646468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.197 [2024-07-15 16:10:17.753590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.454 [2024-07-15 16:10:17.934571] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.020 16:10:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.020 16:10:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:25.020 16:10:18 keyring_file -- keyring/file.sh@120 -- # jq length 00:24:25.020 16:10:18 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:25.020 16:10:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:25.278 16:10:18 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:25.278 16:10:18 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:24:25.278 16:10:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:25.278 16:10:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:25.278 16:10:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:25.278 16:10:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:25.278 16:10:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:25.536 16:10:19 keyring_file -- keyring/file.sh@121 -- # (( 
2 == 2 )) 00:24:25.536 16:10:19 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:24:25.536 16:10:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:25.536 16:10:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:25.536 16:10:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:25.536 16:10:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:25.536 16:10:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:25.794 16:10:19 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:25.794 16:10:19 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:25.794 16:10:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:25.794 16:10:19 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:26.115 16:10:19 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:26.115 16:10:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:26.115 16:10:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.YFmzflWp7v /tmp/tmp.0pLaxKEfYB 00:24:26.115 16:10:19 keyring_file -- keyring/file.sh@20 -- # killprocess 101089 00:24:26.115 16:10:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 101089 ']' 00:24:26.115 16:10:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 101089 00:24:26.115 16:10:19 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:26.115 16:10:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:26.115 16:10:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101089 00:24:26.115 killing process with pid 101089 00:24:26.115 Received shutdown signal, test time was about 1.000000 seconds 00:24:26.115 00:24:26.115 Latency(us) 00:24:26.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.115 =================================================================================================================== 00:24:26.115 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:26.115 16:10:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:26.115 16:10:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:26.115 16:10:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101089' 00:24:26.115 16:10:19 keyring_file -- common/autotest_common.sh@967 -- # kill 101089 00:24:26.115 16:10:19 keyring_file -- common/autotest_common.sh@972 -- # wait 101089 00:24:26.373 16:10:19 keyring_file -- keyring/file.sh@21 -- # killprocess 100578 00:24:26.373 16:10:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100578 ']' 00:24:26.373 16:10:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100578 00:24:26.373 16:10:19 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:26.373 16:10:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:26.373 16:10:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100578 00:24:26.373 killing process with pid 100578 00:24:26.373 16:10:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:26.373 16:10:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:26.373 16:10:19 keyring_file -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 100578' 00:24:26.373 16:10:19 keyring_file -- common/autotest_common.sh@967 -- # kill 100578 00:24:26.373 [2024-07-15 16:10:19.862956] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:26.373 16:10:19 keyring_file -- common/autotest_common.sh@972 -- # wait 100578 00:24:26.632 00:24:26.632 real 0m16.224s 00:24:26.632 user 0m40.411s 00:24:26.632 sys 0m3.294s 00:24:26.632 16:10:20 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:26.632 16:10:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:26.632 ************************************ 00:24:26.632 END TEST keyring_file 00:24:26.632 ************************************ 00:24:26.632 16:10:20 -- common/autotest_common.sh@1142 -- # return 0 00:24:26.632 16:10:20 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:24:26.632 16:10:20 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:26.632 16:10:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:26.632 16:10:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:26.632 16:10:20 -- common/autotest_common.sh@10 -- # set +x 00:24:26.632 ************************************ 00:24:26.632 START TEST keyring_linux 00:24:26.632 ************************************ 00:24:26.632 16:10:20 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:26.890 * Looking for test storage... 00:24:26.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:26.890 16:10:20 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a185c444-aaeb-4d13-aa60-df1b0266600d 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a185c444-aaeb-4d13-aa60-df1b0266600d 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:26.890 16:10:20 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.890 16:10:20 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.890 16:10:20 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.890 16:10:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.890 16:10:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.890 16:10:20 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.890 16:10:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:26.890 16:10:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:26.890 16:10:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:26.890 16:10:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:26.890 16:10:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:26.890 16:10:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:26.890 16:10:20 keyring_linux -- 
keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:26.890 16:10:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:26.890 /tmp/:spdk-test:key0 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:26.890 16:10:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:26.890 16:10:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:26.890 /tmp/:spdk-test:key1 00:24:26.890 16:10:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:26.890 16:10:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=101239 00:24:26.890 16:10:20 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:26.890 16:10:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 101239 00:24:26.890 16:10:20 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 101239 ']' 00:24:26.890 16:10:20 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.890 16:10:20 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.890 16:10:20 keyring_linux -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.890 16:10:20 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.890 16:10:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:26.890 [2024-07-15 16:10:20.594907] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 00:24:26.890 [2024-07-15 16:10:20.595037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101239 ] 00:24:27.148 [2024-07-15 16:10:20.737614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.148 [2024-07-15 16:10:20.870393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.119 16:10:21 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.119 16:10:21 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:28.119 16:10:21 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:28.119 16:10:21 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.119 16:10:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:28.119 [2024-07-15 16:10:21.563220] tcp.c: 701:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.119 null0 00:24:28.119 [2024-07-15 16:10:21.595177] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:28.119 [2024-07-15 16:10:21.595405] tcp.c:1016:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:28.119 16:10:21 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.119 16:10:21 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:28.119 149928194 00:24:28.119 16:10:21 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:28.119 144587310 00:24:28.119 16:10:21 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:28.119 16:10:21 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=101275 00:24:28.119 16:10:21 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 101275 /var/tmp/bperf.sock 00:24:28.119 16:10:21 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 101275 ']' 00:24:28.119 16:10:21 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:28.119 16:10:21 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:28.119 16:10:21 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:28.119 16:10:21 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.119 16:10:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:28.119 [2024-07-15 16:10:21.673156] Starting SPDK v24.09-pre git sha1 2f3522da7 / DPDK 24.03.0 initialization... 
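For the keyring_linux variant the PSKs are not files but kernel keyring entries: the two keyctl add calls above load interchange-format strings (prefix NVMeTLSkey-1, a digest field of 00 because the test passes digest 0, then a base64 payload wrapping the configured hex key plus what appears to be a 4-byte checksum) into the session keyring and get back the serial numbers 149928194 and 144587310. A sketch of that round trip using the exact key0 string from this run; the variable names are illustrative only:

    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    # add the interchange PSK to the session keyring under the name :spdk-test:key0
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # prints the serial, 149928194 in this run
    keyctl print "$sn"                                # round-trips the same NVMeTLSkey-1:00:... string
    keyctl search @s user :spdk-test:key0             # resolves the name back to the same serial
    keyctl unlink "$sn"                               # what the cleanup path in linux.sh does at the end

The bperf side then only needs keyring_linux_set_options --enable and the :spdk-test:key0 name as --psk, as the trace that resumes below shows.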
00:24:28.119 [2024-07-15 16:10:21.673237] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101275 ] 00:24:28.119 [2024-07-15 16:10:21.809572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.377 [2024-07-15 16:10:21.935898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.310 16:10:22 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.310 16:10:22 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:29.310 16:10:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:29.310 16:10:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:29.310 16:10:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:29.310 16:10:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:29.876 16:10:23 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:29.876 16:10:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:29.876 [2024-07-15 16:10:23.539841] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:30.134 nvme0n1 00:24:30.134 16:10:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:30.134 16:10:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:30.134 16:10:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:30.134 16:10:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:30.134 16:10:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:30.134 16:10:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:30.406 16:10:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:30.406 16:10:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:30.406 16:10:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:30.406 16:10:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:30.406 16:10:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:30.406 16:10:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:30.406 16:10:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:30.694 16:10:24 keyring_linux -- keyring/linux.sh@25 -- # sn=149928194 00:24:30.694 16:10:24 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:30.694 16:10:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:30.694 16:10:24 keyring_linux -- keyring/linux.sh@26 -- # [[ 149928194 == \1\4\9\9\2\8\1\9\4 ]] 00:24:30.694 16:10:24 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 149928194 00:24:30.694 16:10:24 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:30.694 16:10:24 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:30.694 Running I/O for 1 seconds... 00:24:32.069 00:24:32.069 Latency(us) 00:24:32.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.069 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:32.069 nvme0n1 : 1.01 10540.30 41.17 0.00 0.00 12060.83 3142.75 13405.09 00:24:32.069 =================================================================================================================== 00:24:32.069 Total : 10540.30 41.17 0.00 0.00 12060.83 3142.75 13405.09 00:24:32.069 0 00:24:32.069 16:10:25 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:32.069 16:10:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:32.069 16:10:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:32.069 16:10:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:32.069 16:10:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:32.069 16:10:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:32.069 16:10:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:32.069 16:10:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:32.328 16:10:26 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:32.328 16:10:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:32.328 16:10:26 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:32.328 16:10:26 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:32.328 16:10:26 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:24:32.328 16:10:26 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:32.328 16:10:26 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:32.328 16:10:26 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:32.328 16:10:26 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:32.328 16:10:26 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:32.328 16:10:26 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:32.328 16:10:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:24:32.586 [2024-07-15 16:10:26.289314] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:32.586 [2024-07-15 16:10:26.289969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb78ea0 (107): Transport endpoint is not connected 00:24:32.586 [2024-07-15 16:10:26.290945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb78ea0 (9): Bad file descriptor 00:24:32.586 [2024-07-15 16:10:26.291943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:32.586 [2024-07-15 16:10:26.291972] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:32.586 [2024-07-15 16:10:26.291983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:32.586 2024/07/15 16:10:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:32.586 request: 00:24:32.586 { 00:24:32.586 "method": "bdev_nvme_attach_controller", 00:24:32.586 "params": { 00:24:32.586 "name": "nvme0", 00:24:32.586 "trtype": "tcp", 00:24:32.586 "traddr": "127.0.0.1", 00:24:32.586 "adrfam": "ipv4", 00:24:32.586 "trsvcid": "4420", 00:24:32.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:32.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:32.586 "prchk_reftag": false, 00:24:32.586 "prchk_guard": false, 00:24:32.586 "hdgst": false, 00:24:32.586 "ddgst": false, 00:24:32.586 "psk": ":spdk-test:key1" 00:24:32.586 } 00:24:32.586 } 00:24:32.586 Got JSON-RPC error response 00:24:32.586 GoRPCClient: error on JSON-RPC call 00:24:32.586 16:10:26 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:24:32.586 16:10:26 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:32.586 16:10:26 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:32.586 16:10:26 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:32.586 16:10:26 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:32.586 16:10:26 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:32.586 16:10:26 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:32.586 16:10:26 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@33 -- # sn=149928194 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 149928194 00:24:32.845 1 links removed 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@33 -- # sn=144587310 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 144587310 00:24:32.845 1 links removed 00:24:32.845 16:10:26 keyring_linux -- keyring/linux.sh@41 -- # killprocess 101275 00:24:32.845 16:10:26 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 101275 ']' 00:24:32.845 16:10:26 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 101275 00:24:32.845 16:10:26 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:24:32.845 16:10:26 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.845 16:10:26 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101275 00:24:32.845 killing process with pid 101275 00:24:32.845 Received shutdown signal, test time was about 1.000000 seconds 00:24:32.845 00:24:32.845 Latency(us) 00:24:32.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.845 =================================================================================================================== 00:24:32.845 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.846 16:10:26 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:32.846 16:10:26 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:32.846 16:10:26 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101275' 00:24:32.846 16:10:26 keyring_linux -- common/autotest_common.sh@967 -- # kill 101275 00:24:32.846 16:10:26 keyring_linux -- common/autotest_common.sh@972 -- # wait 101275 00:24:33.104 16:10:26 keyring_linux -- keyring/linux.sh@42 -- # killprocess 101239 00:24:33.104 16:10:26 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 101239 ']' 00:24:33.104 16:10:26 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 101239 00:24:33.104 16:10:26 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:24:33.104 16:10:26 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.104 16:10:26 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101239 00:24:33.104 killing process with pid 101239 00:24:33.104 16:10:26 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:33.104 16:10:26 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:33.104 16:10:26 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101239' 00:24:33.104 16:10:26 keyring_linux -- common/autotest_common.sh@967 -- # kill 101239 00:24:33.104 16:10:26 keyring_linux -- common/autotest_common.sh@972 -- # wait 101239 00:24:33.362 00:24:33.362 real 0m6.698s 00:24:33.362 user 0m13.167s 00:24:33.362 sys 0m1.712s 00:24:33.362 16:10:27 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:33.362 16:10:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:33.362 ************************************ 00:24:33.362 END TEST keyring_linux 00:24:33.362 ************************************ 00:24:33.362 16:10:27 -- common/autotest_common.sh@1142 -- # return 0 00:24:33.362 16:10:27 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:24:33.362 16:10:27 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:24:33.362 16:10:27 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:24:33.362 16:10:27 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:24:33.362 16:10:27 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 
']' 00:24:33.362 16:10:27 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:24:33.362 16:10:27 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:24:33.362 16:10:27 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:24:33.362 16:10:27 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:24:33.362 16:10:27 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:24:33.362 16:10:27 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:24:33.362 16:10:27 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:24:33.362 16:10:27 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:24:33.362 16:10:27 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:24:33.362 16:10:27 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:24:33.362 16:10:27 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:24:33.362 16:10:27 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:24:33.362 16:10:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:33.362 16:10:27 -- common/autotest_common.sh@10 -- # set +x 00:24:33.362 16:10:27 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:24:33.362 16:10:27 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:24:33.362 16:10:27 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:24:33.362 16:10:27 -- common/autotest_common.sh@10 -- # set +x 00:24:35.261 INFO: APP EXITING 00:24:35.261 INFO: killing all VMs 00:24:35.261 INFO: killing vhost app 00:24:35.261 INFO: EXIT DONE 00:24:35.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:35.520 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:35.778 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:36.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:36.344 Cleaning 00:24:36.345 Removing: /var/run/dpdk/spdk0/config 00:24:36.345 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:36.345 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:36.345 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:36.345 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:36.345 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:36.345 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:36.345 Removing: /var/run/dpdk/spdk1/config 00:24:36.345 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:36.345 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:36.345 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:36.345 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:36.345 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:36.345 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:36.345 Removing: /var/run/dpdk/spdk2/config 00:24:36.345 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:36.345 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:36.345 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:36.345 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:36.345 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:36.345 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:36.345 Removing: /var/run/dpdk/spdk3/config 00:24:36.345 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:36.345 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:36.345 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:36.345 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:36.345 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:36.345 Removing: /var/run/dpdk/spdk3/hugepage_info 
00:24:36.345 Removing: /var/run/dpdk/spdk4/config 00:24:36.345 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:36.345 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:36.345 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:36.345 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:36.345 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:36.603 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:36.603 Removing: /dev/shm/nvmf_trace.0 00:24:36.603 Removing: /dev/shm/spdk_tgt_trace.pid60657 00:24:36.603 Removing: /var/run/dpdk/spdk0 00:24:36.603 Removing: /var/run/dpdk/spdk1 00:24:36.603 Removing: /var/run/dpdk/spdk2 00:24:36.603 Removing: /var/run/dpdk/spdk3 00:24:36.603 Removing: /var/run/dpdk/spdk4 00:24:36.603 Removing: /var/run/dpdk/spdk_pid100086 00:24:36.603 Removing: /var/run/dpdk/spdk_pid100121 00:24:36.603 Removing: /var/run/dpdk/spdk_pid100153 00:24:36.603 Removing: /var/run/dpdk/spdk_pid100578 00:24:36.603 Removing: /var/run/dpdk/spdk_pid100613 00:24:36.603 Removing: /var/run/dpdk/spdk_pid101089 00:24:36.603 Removing: /var/run/dpdk/spdk_pid101239 00:24:36.603 Removing: /var/run/dpdk/spdk_pid101275 00:24:36.603 Removing: /var/run/dpdk/spdk_pid60506 00:24:36.603 Removing: /var/run/dpdk/spdk_pid60657 00:24:36.603 Removing: /var/run/dpdk/spdk_pid60918 00:24:36.603 Removing: /var/run/dpdk/spdk_pid61011 00:24:36.603 Removing: /var/run/dpdk/spdk_pid61050 00:24:36.603 Removing: /var/run/dpdk/spdk_pid61165 00:24:36.603 Removing: /var/run/dpdk/spdk_pid61195 00:24:36.603 Removing: /var/run/dpdk/spdk_pid61319 00:24:36.603 Removing: /var/run/dpdk/spdk_pid61593 00:24:36.603 Removing: /var/run/dpdk/spdk_pid61769 00:24:36.603 Removing: /var/run/dpdk/spdk_pid61846 00:24:36.603 Removing: /var/run/dpdk/spdk_pid61938 00:24:36.603 Removing: /var/run/dpdk/spdk_pid62027 00:24:36.603 Removing: /var/run/dpdk/spdk_pid62066 00:24:36.603 Removing: /var/run/dpdk/spdk_pid62096 00:24:36.603 Removing: /var/run/dpdk/spdk_pid62163 00:24:36.603 Removing: /var/run/dpdk/spdk_pid62275 00:24:36.603 Removing: /var/run/dpdk/spdk_pid62916 00:24:36.603 Removing: /var/run/dpdk/spdk_pid62980 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63049 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63077 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63156 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63184 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63269 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63297 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63348 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63383 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63430 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63460 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63612 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63642 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63722 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63786 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63816 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63869 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63909 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63938 00:24:36.603 Removing: /var/run/dpdk/spdk_pid63978 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64013 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64047 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64082 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64116 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64153 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64187 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64222 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64256 00:24:36.603 Removing: 
/var/run/dpdk/spdk_pid64295 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64331 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64368 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64408 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64443 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64486 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64518 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64559 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64600 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64670 00:24:36.603 Removing: /var/run/dpdk/spdk_pid64781 00:24:36.603 Removing: /var/run/dpdk/spdk_pid65194 00:24:36.603 Removing: /var/run/dpdk/spdk_pid68568 00:24:36.603 Removing: /var/run/dpdk/spdk_pid68918 00:24:36.603 Removing: /var/run/dpdk/spdk_pid71326 00:24:36.603 Removing: /var/run/dpdk/spdk_pid71702 00:24:36.603 Removing: /var/run/dpdk/spdk_pid71963 00:24:36.603 Removing: /var/run/dpdk/spdk_pid72014 00:24:36.603 Removing: /var/run/dpdk/spdk_pid72629 00:24:36.603 Removing: /var/run/dpdk/spdk_pid73068 00:24:36.603 Removing: /var/run/dpdk/spdk_pid73118 00:24:36.603 Removing: /var/run/dpdk/spdk_pid73485 00:24:36.603 Removing: /var/run/dpdk/spdk_pid74009 00:24:36.603 Removing: /var/run/dpdk/spdk_pid74461 00:24:36.603 Removing: /var/run/dpdk/spdk_pid75439 00:24:36.603 Removing: /var/run/dpdk/spdk_pid76431 00:24:36.603 Removing: /var/run/dpdk/spdk_pid76550 00:24:36.603 Removing: /var/run/dpdk/spdk_pid76618 00:24:36.603 Removing: /var/run/dpdk/spdk_pid78087 00:24:36.603 Removing: /var/run/dpdk/spdk_pid78308 00:24:36.861 Removing: /var/run/dpdk/spdk_pid83660 00:24:36.861 Removing: /var/run/dpdk/spdk_pid84103 00:24:36.861 Removing: /var/run/dpdk/spdk_pid84208 00:24:36.861 Removing: /var/run/dpdk/spdk_pid84359 00:24:36.861 Removing: /var/run/dpdk/spdk_pid84405 00:24:36.861 Removing: /var/run/dpdk/spdk_pid84449 00:24:36.861 Removing: /var/run/dpdk/spdk_pid84496 00:24:36.861 Removing: /var/run/dpdk/spdk_pid84654 00:24:36.861 Removing: /var/run/dpdk/spdk_pid84807 00:24:36.861 Removing: /var/run/dpdk/spdk_pid85081 00:24:36.861 Removing: /var/run/dpdk/spdk_pid85205 00:24:36.861 Removing: /var/run/dpdk/spdk_pid85459 00:24:36.861 Removing: /var/run/dpdk/spdk_pid85584 00:24:36.861 Removing: /var/run/dpdk/spdk_pid85719 00:24:36.862 Removing: /var/run/dpdk/spdk_pid86054 00:24:36.862 Removing: /var/run/dpdk/spdk_pid86475 00:24:36.862 Removing: /var/run/dpdk/spdk_pid86789 00:24:36.862 Removing: /var/run/dpdk/spdk_pid87289 00:24:36.862 Removing: /var/run/dpdk/spdk_pid87297 00:24:36.862 Removing: /var/run/dpdk/spdk_pid87630 00:24:36.862 Removing: /var/run/dpdk/spdk_pid87650 00:24:36.862 Removing: /var/run/dpdk/spdk_pid87664 00:24:36.862 Removing: /var/run/dpdk/spdk_pid87699 00:24:36.862 Removing: /var/run/dpdk/spdk_pid87706 00:24:36.862 Removing: /var/run/dpdk/spdk_pid88061 00:24:36.862 Removing: /var/run/dpdk/spdk_pid88104 00:24:36.862 Removing: /var/run/dpdk/spdk_pid88442 00:24:36.862 Removing: /var/run/dpdk/spdk_pid88689 00:24:36.862 Removing: /var/run/dpdk/spdk_pid89172 00:24:36.862 Removing: /var/run/dpdk/spdk_pid89753 00:24:36.862 Removing: /var/run/dpdk/spdk_pid91109 00:24:36.862 Removing: /var/run/dpdk/spdk_pid91695 00:24:36.862 Removing: /var/run/dpdk/spdk_pid91707 00:24:36.862 Removing: /var/run/dpdk/spdk_pid93639 00:24:36.862 Removing: /var/run/dpdk/spdk_pid93725 00:24:36.862 Removing: /var/run/dpdk/spdk_pid93814 00:24:36.862 Removing: /var/run/dpdk/spdk_pid93906 00:24:36.862 Removing: /var/run/dpdk/spdk_pid94063 00:24:36.862 Removing: /var/run/dpdk/spdk_pid94154 00:24:36.862 Removing: /var/run/dpdk/spdk_pid94244 
00:24:36.862 Removing: /var/run/dpdk/spdk_pid94329 00:24:36.862 Removing: /var/run/dpdk/spdk_pid94675 00:24:36.862 Removing: /var/run/dpdk/spdk_pid95360 00:24:36.862 Removing: /var/run/dpdk/spdk_pid96717 00:24:36.862 Removing: /var/run/dpdk/spdk_pid96918 00:24:36.862 Removing: /var/run/dpdk/spdk_pid97209 00:24:36.862 Removing: /var/run/dpdk/spdk_pid97508 00:24:36.862 Removing: /var/run/dpdk/spdk_pid98052 00:24:36.862 Removing: /var/run/dpdk/spdk_pid98057 00:24:36.862 Removing: /var/run/dpdk/spdk_pid98424 00:24:36.862 Removing: /var/run/dpdk/spdk_pid98577 00:24:36.862 Removing: /var/run/dpdk/spdk_pid98734 00:24:36.862 Removing: /var/run/dpdk/spdk_pid98831 00:24:36.862 Removing: /var/run/dpdk/spdk_pid98986 00:24:36.862 Removing: /var/run/dpdk/spdk_pid99095 00:24:36.862 Removing: /var/run/dpdk/spdk_pid99767 00:24:36.862 Removing: /var/run/dpdk/spdk_pid99802 00:24:36.862 Removing: /var/run/dpdk/spdk_pid99836 00:24:36.862 Clean 00:24:36.862 16:10:30 -- common/autotest_common.sh@1451 -- # return 0 00:24:36.862 16:10:30 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:24:36.862 16:10:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.862 16:10:30 -- common/autotest_common.sh@10 -- # set +x 00:24:37.120 16:10:30 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:24:37.120 16:10:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:37.121 16:10:30 -- common/autotest_common.sh@10 -- # set +x 00:24:37.121 16:10:30 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:37.121 16:10:30 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:37.121 16:10:30 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:37.121 16:10:30 -- spdk/autotest.sh@391 -- # hash lcov 00:24:37.121 16:10:30 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:24:37.121 16:10:30 -- spdk/autotest.sh@393 -- # hostname 00:24:37.121 16:10:30 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:37.379 geninfo: WARNING: invalid characters removed from testname! 
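Annotation: the lcov invocation that just ran (and the merge/filter passes that follow in the next log lines) implement a standard capture, merge, and filter coverage flow. Below is a condensed sketch of that flow; the paths, exclude globs, and lcov flags are copied from the log itself, while the LCOV_OPTS/OUT variable names and the loop are illustrative shorthand, not the autotest script.

    # Condensed sketch of the coverage post-processing traced in this log (variable names illustrative).
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    OUT=/home/vagrant/spdk_repo/spdk/../output

    # Capture the counters gathered during the test run into a tracefile, tagged with the hostname.
    lcov $LCOV_OPTS -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o $OUT/cov_test.info

    # Merge the pre-test baseline with the test tracefile.
    lcov $LCOV_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info

    # Strip third-party and uninteresting paths, as the subsequent log lines do one pattern at a time.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r $OUT/cov_total.info "$pattern" -o $OUT/cov_total.info
    done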
00:25:09.441 16:10:58 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:09.441 16:11:02 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:11.345 16:11:05 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:14.627 16:11:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:17.910 16:11:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:20.434 16:11:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:22.961 16:11:16 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:22.961 16:11:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:22.961 16:11:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:22.961 16:11:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.961 16:11:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.961 16:11:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.961 16:11:16 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.961 16:11:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.961 16:11:16 -- paths/export.sh@5 -- $ export PATH 00:25:22.961 16:11:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.961 16:11:16 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:22.961 16:11:16 -- common/autobuild_common.sh@444 -- $ date +%s 00:25:22.961 16:11:16 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721059876.XXXXXX 00:25:22.961 16:11:16 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721059876.ygU5PY 00:25:22.961 16:11:16 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:25:22.961 16:11:16 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:25:22.961 16:11:16 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:25:22.961 16:11:16 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:22.961 16:11:16 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:22.961 16:11:16 -- common/autobuild_common.sh@460 -- $ get_config_params 00:25:22.961 16:11:16 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:25:22.961 16:11:16 -- common/autotest_common.sh@10 -- $ set +x 00:25:22.961 16:11:16 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:25:22.961 16:11:16 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:25:22.961 16:11:16 -- pm/common@17 -- $ local monitor 00:25:22.961 16:11:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:22.961 16:11:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:22.961 16:11:16 -- pm/common@25 -- $ sleep 1 00:25:22.961 16:11:16 -- pm/common@21 -- $ date +%s 00:25:22.961 16:11:16 -- pm/common@21 -- $ date +%s 00:25:22.961 16:11:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721059876 00:25:22.961 16:11:16 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721059876 00:25:23.218 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721059876_collect-vmstat.pm.log 00:25:23.218 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721059876_collect-cpu-load.pm.log 00:25:24.150 16:11:17 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:25:24.150 16:11:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:25:24.150 16:11:17 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:24.150 16:11:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:24.150 16:11:17 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:24.150 16:11:17 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:24.150 16:11:17 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:24.150 16:11:17 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:24.150 16:11:17 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:24.150 16:11:17 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:24.150 16:11:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:24.150 16:11:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:24.150 16:11:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:24.150 16:11:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:24.150 16:11:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:24.150 16:11:17 -- pm/common@44 -- $ pid=102987 00:25:24.150 16:11:17 -- pm/common@50 -- $ kill -TERM 102987 00:25:24.150 16:11:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:24.150 16:11:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:24.150 16:11:17 -- pm/common@44 -- $ pid=102989 00:25:24.150 16:11:17 -- pm/common@50 -- $ kill -TERM 102989 00:25:24.150 + [[ -n 5260 ]] 00:25:24.150 + sudo kill 5260 00:25:24.159 [Pipeline] } 00:25:24.180 [Pipeline] // timeout 00:25:24.185 [Pipeline] } 00:25:24.201 [Pipeline] // stage 00:25:24.207 [Pipeline] } 00:25:24.220 [Pipeline] // catchError 00:25:24.228 [Pipeline] stage 00:25:24.230 [Pipeline] { (Stop VM) 00:25:24.243 [Pipeline] sh 00:25:24.519 + vagrant halt 00:25:28.719 ==> default: Halting domain... 00:25:35.281 [Pipeline] sh 00:25:35.558 + vagrant destroy -f 00:25:39.739 ==> default: Removing domain... 
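Annotation: the stop_monitor_resources teardown above follows a simple pidfile pattern: each resource monitor (collect-cpu-load, collect-vmstat) recorded its PID under the power/ output directory at startup, and shutdown sends SIGTERM to whatever those files point at. A minimal sketch of that pattern is below; the directory and monitor names come from the log, but the function body is a reconstruction and therefore approximate.

    # Approximate sketch of the pidfile-based monitor shutdown seen above (not the pm/common source).
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power

    stop_monitors() {
        local monitor pidfile pid
        for monitor in collect-cpu-load collect-vmstat; do
            pidfile="$power_dir/$monitor.pid"
            [[ -e $pidfile ]] || continue      # monitor never started, nothing to stop
            pid=$(<"$pidfile")
            kill -TERM "$pid" 2>/dev/null || true
        done
    }

    stop_monitors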
00:25:39.748 [Pipeline] sh 00:25:40.021 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:25:40.030 [Pipeline] } 00:25:40.048 [Pipeline] // stage 00:25:40.053 [Pipeline] } 00:25:40.070 [Pipeline] // dir 00:25:40.075 [Pipeline] } 00:25:40.091 [Pipeline] // wrap 00:25:40.097 [Pipeline] } 00:25:40.112 [Pipeline] // catchError 00:25:40.121 [Pipeline] stage 00:25:40.123 [Pipeline] { (Epilogue) 00:25:40.137 [Pipeline] sh 00:25:40.415 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:47.009 [Pipeline] catchError 00:25:47.012 [Pipeline] { 00:25:47.028 [Pipeline] sh 00:25:47.307 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:47.307 Artifacts sizes are good 00:25:47.316 [Pipeline] } 00:25:47.332 [Pipeline] // catchError 00:25:47.343 [Pipeline] archiveArtifacts 00:25:47.349 Archiving artifacts 00:25:47.539 [Pipeline] cleanWs 00:25:47.550 [WS-CLEANUP] Deleting project workspace... 00:25:47.550 [WS-CLEANUP] Deferred wipeout is used... 00:25:47.556 [WS-CLEANUP] done 00:25:47.557 [Pipeline] } 00:25:47.575 [Pipeline] // stage 00:25:47.581 [Pipeline] } 00:25:47.598 [Pipeline] // node 00:25:47.605 [Pipeline] End of Pipeline 00:25:47.739 Finished: SUCCESS
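Annotation: for readers tracing the keyring_linux test earlier in this log, its core sequence reduces to the commands below. The RPC socket, method names, controller parameters, and key names are copied verbatim from the traced output; treat the listing as a simplified recap of what the log shows, not as the test script itself.

    # Simplified recap of the keyring_linux flow exercised above (parameters taken from the log).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    # Enable kernel-keyring key lookups in the bdevperf instance, then finish framework init.
    $RPC -s $SOCK keyring_linux_set_options --enable
    $RPC -s $SOCK framework_start_init

    # Attach a TCP controller whose TLS PSK is resolved from the session keyring (":spdk-test:key0").
    $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0

    # The test cross-checks the key's serial number and payload via keyctl ...
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl print "$sn"

    # ... and unlinks the keys from the session keyring during cleanup.
    keyctl unlink "$sn"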